Images, posts & videos related to "Lambda calculus"
Lambda calculus is the foundation of functional languages, in the sense that it can almost be seen as a subset of a functional language, up to minor syntactic changes.
Lambda calculus is also a model of computation (see https://en.wikipedia.org/wiki/Model_of_computation).
Is there a model of computation which has a similar influence on imperative languages as lambda calculus has on functional languages?
The Turing machine is used in computability theory. Is writing a program on a universal Turing machine like writing a program in a procedural language?
The RAM (random-access machine) is used in complexity theory. Does it have the same influence on procedural imperative languages as lambda calculus has on functional languages?
Thanks.
So I had this idea for an esoteric programming language. It's not exactly a complex idea, but I have a rough idea of the semantics. Let me know if there's anything I missed, or that you think I should add.
So we have all the features of lambda calculus, which (if I'm remembering correctly) are:
I'm probably not 100% on that, so feel free to clear things up in the comments.
Right, onto language semantics.
We have a simplified (I know, simpler than 8?) set of the brainfuck characters:
All of the brainfuck operators (with the exception of "_") are lambda functions that take one parameter, perform their intended side effect, then return the given parameter. So in a sense each is the identity function (\x: x), except that it performs a side effect.
That's the brainfuck stuff out of the way. We don't have the square brackets "[" and "]", because lambda calculus can perform recursion, so the loops they provide aren't required.
Now for lambda calculus semantics. Sticking within the theme of brainfuck, let's make them all single characters:
White space is used as a separator for parameters in a function definition, and those passed into a function.
So, that's pretty much it. 14 symbols and a functional programming language. Note that as with lambda calculus, alpha conversion and beta reduction would be performed. But that would likely be done by the compiler, unless you wish to do it manually at the time of writing.
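As a rough illustration of the "identity function with a side effect" idea (my sketch, not part of the language itself; the one-cell tape and the name `plusOp` are hypothetical), in Haskell:

```haskell
import Data.IORef

-- Hypothetical model: a one-cell "tape" standing in for brainfuck's array.
type Cell = IORef Int

-- '+' as an identity function with a side effect: bump the cell, then
-- return the argument unchanged, i.e. (\x: x) plus an effect.
plusOp :: Cell -> a -> IO a
plusOp cell x = do
  modifyIORef cell (+ 1)
  pure x
```

Here `plusOp cell` behaves like the identity function except that applying it also increments the cell.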
In terms of use
In lambda calculus and other functional languages, what is the granularity of specifying applicative or normal evaluation order?
I also seem to remember that in some functional language (Haskell?), the evaluation order is a language-wide choice. By default, it is either applicative or normal order, and it can be switched by a statement in programs. When it is switched, will the evaluation order for all the functions defined both before and after the switch be changed? Can we switch the evaluation order for all the calls to a function, without affecting the calls to other functions? Can we switch the evaluation order of a function call, without affecting other function calls?
I have the above question because, for the normal-order sequencing combinator Seq, when evaluating ((Seq (display "hello")) (display "world")) to ((λz.(display "world")) (display "hello")), I am wondering whether ((λz.(display "world")) (display "hello")) is evaluated in applicative or normal order, since I guess both "hello" and "world" are supposed to be printed, but in normal order "hello" isn't printed.
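For intuition, a small Haskell sketch (mine, not the post's Seq): under Haskell's lazy, normal-order-like evaluation, an argument the body never uses is never evaluated at all, which is exactly the behaviour the question is probing:

```haskell
-- (\z. y) x under normal order: substitute first, so x is never forced.
discardFirst :: a -> b -> b
discardFirst x y = (\_z -> y) x

-- Because the first argument is never demanded, even a diverging or
-- erroring argument is harmless under lazy evaluation.
example :: String
example = discardFirst (error "never evaluated") "world"
```

Evaluating `example` yields "world" without ever touching the erroring argument, which is why a normal-order Seq has to force its first argument explicitly.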
Thanks.
Hi
I'm experimenting with the untyped lambda calculus in Coq, in preparation for formalizing some compiler IRs for a permanently-WIP compiler I'm working on.
I need some advice on what styles work best in Coq and how to abuse the notation system for embedded DSLs.
I'm messing around and I'm not sure what works and what's dumb.
I vacillate between various flavors of variable binders, and I'm still learning what's powerful and convenient.
Parametric higher-order abstract syntax has its good points but is very painful in some other ways, and these issues super-size when experimenting with dependent types.
Right now I'm playing with thinking of variable substitution in terms of repeated substitution of one-hole contexts. It kind of makes sense? I thought maybe finding some core mathematical "what is it" would make things nice, but honestly I don't think I've found the "what is it", if it's there.
The stuff I've got right now doesn't look very nice and doesn't look very ready for proving anything sensible in. I have some vague idea about alpha renaming/recycling of variable binders/resource management with some linear-comonoid-like stuff, but that doesn't really work well in Coq, which of course doesn't support linear types, or at least not easily. Some sort of region encoding seems possible but might be pretty ugly.
Some particular nits:
I'm not sure how to handle partial functions well, such as D, which finds a one-hole context if possible. I'd really like the "hole" to be an implicit parameter, but this just doesn't work well when you have an option type in the way. I might be able to shift things around if a proof that the variable occurs in the body were passed in, but this sounds ugly.
I might be abusing sections too much. Not sure.
I don't even use the whole coinductive list approximation stuff in the interpreter. Not sure of something that would work when extracted to another language.
Not sure how to make Eval reduce more stuff. I wish I could leave the variable binder type abstract but I don't think that's possible for Eval.
I think I'm going to need to pass around a sort of partial order along with "split", enforcing that the cloned children of a variable binder should not compare equal to it. Not quite sure how to mathematize a parent-children relationship with "split".
https://gist.github.com/sstewartgallus/42e23fbb1de94475ec2cd44569be39fd
In lambda calculus,
is a lambda abstraction always a value and therefore not evaluated further, regardless of whether the body of the abstraction can be further evaluated?
In other words, is eta-reduction by default not performed? Are only beta-reduction and alpha-conversion performed by default?
Are the answers to the above questions the same for both applicative order and normal order evaluations?
Thanks.
I am trying to implement normal-order reduction of lambda calculus in Haskell; however, I'm facing issues with two functions that are supposed to implement call-by-name and the normal-order reduction.
data LExpr = Var String
| App LExpr LExpr
| Lam String LExpr
newVar :: [String] -> String
newVar ls = go "z"
where go s | filter (==s) ls==[] = s
| otherwise = go (s++"'")
subst :: LExpr -> String -> LExpr -> LExpr
subst m x (App e1 e2) = App(subst m x e1) (subst m x e2)
subst m x (Var str) | str == x = m
| otherwise = Var str
subst m x (Lam str e) | str == x = Lam str e
| str/= x && not (elem str (free m)) = Lam str (subst m x e)
| otherwise = subst m x ((\y-> Lam y (subst (Var y) str e)) (newVar (free m)))
This is the function I've tried to implement for lazy evaluation (normal-order reduction); however, it doesn't work correctly:
lazy :: LExpr -> LExpr
lazy = flip go []
  where
    go (App e1 e2) ls = go e1 (e2:ls)
    go (Lam y e) (l:ls) = go (subst l y e) ls
    go la@(Lam y e) [] = la
    go v@(Var _) ls = foldl App v ls
e = Lam "y" (App (Var "f") (App (Var "x") (Var "y")))
m = App (Var "g") (Var "y")
I'd really appreciate some advice and help on how to fix this function. Thank you!
Edit : Included the rest of my code
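For comparison, here is one hedged sketch of a normal-order reducer, split into a weak-head-normal-form step plus full normalisation under binders. It follows the post's `LExpr` and `subst` style, but the details (including the `free` function and the capture-avoidance choices) are my assumptions, not the poster's exact code:

```haskell
data LExpr = Var String
           | App LExpr LExpr
           | Lam String LExpr
           deriving (Eq, Show)

-- free variables of a term (assumed helper; the post references `free`)
free :: LExpr -> [String]
free (Var x)   = [x]
free (App a b) = free a ++ free b
free (Lam x b) = filter (/= x) (free b)

newVar :: [String] -> String
newVar ls = go "z"
  where go s | s `notElem` ls = s
             | otherwise      = go (s ++ "'")

-- subst m x e: substitute m for x in e, avoiding capture
subst :: LExpr -> String -> LExpr -> LExpr
subst m x (App e1 e2) = App (subst m x e1) (subst m x e2)
subst m x (Var s) | s == x    = m
                  | otherwise = Var s
subst m x (Lam s e)
  | s == x             = Lam s e
  | s `notElem` free m = Lam s (subst m x e)
  | otherwise          = let y = newVar (free m ++ free e)
                         in subst m x (Lam y (subst (Var y) s e))

-- reduce to weak head normal form: contract only head redexes
whnf :: LExpr -> LExpr
whnf (App e1 e2) = case whnf e1 of
  Lam x b -> whnf (subst e2 x b)
  f       -> App f e2
whnf e = e

-- normal order: WHNF first, then normalise under binders / in arguments
normal :: LExpr -> LExpr
normal e = case whnf e of
  Lam x b -> Lam x (normal b)
  App f a -> App (normal f) (normal a)  -- head is stuck, f not a lambda
  v       -> v
```

The key difference from the accumulator version is that after reaching WHNF, `normal` keeps reducing inside lambda bodies and in stuck arguments, which full normal-order reduction requires.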
As much as I read and try to understand lambda calculus, I just can't. Can someone explain it to me in the simplest way, so I can at least understand the basics of it? :D
I couldn't find many small implementations of a simply-typed lambda calculus, so I decided to create one in F#. This might be useful for anyone interested in the basics of implementing a functional programming language.
I am creating a lambda calculus parser for fun, and I'm running into some issues because I am not very experienced with Parsec. The code I wrote to parse expressions is below:
data Expression
= Variable String
| Abstraction Expression Expression
| Application Expression Expression
parens :: Parser a -> Parser a
parens = between (char '(') (char ')')
expression :: Parser Expression
expression = parens expression <|> abstraction <|> try application <|> variable
abstraction :: Parser Expression
abstraction =
Abstraction
<$> between (char '\\') (char '.') variable
<*> expression
application :: Parser Expression
application =
Application
<$> expression
<*> (space >> expression)
variable :: Parser Expression
variable = Variable <$> many1 alphaNum
There are several issues with this code:
When it tries to parse an application, it runs into an infinite loop: it can never parse the first expression as a variable, because it tries to parse an application first (I can explain this better if you need). If I put variable before application, it never parses an application, because variable succeeds and so it doesn't consume the whole input.
Many parentheses are needed because applications have no associativity (I think I'm using that word right)
Any help would be appreciated
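A standard way out of both problems is to left-factor the grammar: parse atoms (variables, parenthesised expressions, abstractions) and fold a sequence of atoms into left-associated applications. The same shape works in Parsec with `many1`/`sepBy1`; as a self-contained sketch, here it is with the stdlib `ReadP` (the grammar restructuring, not the library, is the point):

```haskell
import Data.Char (isAlphaNum)
import Text.ParserCombinators.ReadP

data Expression
  = Variable String
  | Abstraction Expression Expression
  | Application Expression Expression
  deriving (Eq, Show)

parens :: ReadP a -> ReadP a
parens = between (char '(') (char ')')

variable :: ReadP Expression
variable = Variable <$> munch1 isAlphaNum

abstraction :: ReadP Expression
abstraction =
  Abstraction
    <$> between (char '\\') (char '.') variable
    <*> expression

-- Atoms are the non-left-recursive pieces of the grammar.
atom :: ReadP Expression
atom = parens expression +++ abstraction +++ variable

-- An expression is one or more atoms; folding left gives application
-- its usual left associativity, so "f x y" parses as "(f x) y".
expression :: ReadP Expression
expression = foldl1 Application <$> sepBy1 atom (munch1 (== ' '))

-- Keep only parses that consume the whole input.
parseLC :: String -> Maybe Expression
parseLC s = case [e | (e, "") <- readP_to_S expression s] of
  [e] -> Just e
  _   -> Nothing
```

Because application is built by folding a list of atoms rather than by a left-recursive rule, the infinite loop disappears, and "f x y" no longer needs parentheses to associate to the left.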
Wikipedia tells me that lambda calculus "is a universal model of computation that can be used to simulate any Turing machine."
Does that mean that a large software package (say, League of Legends) could be represented as a set of lambda expressions? (not that it would be practical to represent it that way)
Hey everyone!
This is a project I worked on during the start of the summer, and after some rewrites of the front end I think it's in a good enough state to share with people. The interpreter is written in TypeScript and is my first major PL project, so if you have constructive criticism/advice I'd appreciate it. If you want to play around with the interpreter, it's available to try on lambster.dev or you can use the CLI by installing it via npm.
Please let me know what you think. Thanks!
Links:
Lambster: https://lambster.dev
Interpreter source: https://github.com/minchingtonak/lambster
Website source: https://github.com/minchingtonak/lambster.dev
I have written a little paper:
Programming in Lambda Calculus
The text looks at lambda calculus from a programmer's point of view. Everyone with some programming experience should be able to read and understand the text.
Special attention is given to how to represent algebraic data types in lambda calculus.
I hope some of you might enjoy the paper.
Hi there.
I've recently been learning about the "Propositions as Types" paradigm, which stems from the Curry-Howard correspondence, but I can't seem to find online material with beginner-friendly explanations of the "next steps" following the correspondence of simply-typed lambda calculus and propositional calculus (a.k.a. zeroth-order logic). Therefore, I would appreciate help with some questions and/or pointers to resources which may answer them.
From what I understand so far, adding types to Lambda calculus revokes its Turing-completeness as it becomes "strongly normalizing", meaning every well-typed program terminates through reduction to a normal form (question 1: proof?). It is then pretty straightforward to see that its type system corresponds to propositional calculus with implication (->). So far so good.
At this point begins the logic system rabbit-hole.
Further reading on different logics made me wonder which logic the correspondence is to, exactly. It is clearly not classical logic, as that seems to require call/cc or some other control mechanism, and it seems like the answer is intuitionistic propositional logic (question 2: which variant?). However, I don't see where the other logical connectives are, or a clear listing of the intuitionistic axioms and their corresponding rules in simply-typed lambda calculus.
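Concretely, the other connectives show up as extra type formers. A hedged Haskell sketch of the usual dictionary, using `(,)` for conjunction, `Either` for disjunction, `->` for implication, and `Void` for falsity:

```haskell
import Data.Void (Void, absurd)

-- implication introduction: the K axiom, A -> (B -> A)
axiomK :: a -> b -> a
axiomK x _ = x

-- conjunction as pairs: a proof of A /\ B -> B /\ A
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- disjunction as sums: case analysis is \/-elimination
orElim :: (a -> c) -> (b -> c) -> Either a b -> c
orElim f g = either f g

-- negation as A -> Void: ex falso quodlibet
exFalso :: a -> (a -> Void) -> b
exFalso x notX = absurd (notX x)
```

Each well-typed program is an intuitionistic proof of its type; a classical tautology like Peirce's law `((a -> b) -> a) -> a` has no such program without a control operator.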
It seems natural to extend the simply-typed lambda calculus to a more powerful logic. For instance, adding a product type (*) gets us the correspondent of logical conjunction (AND). Logical disjunction (OR) then seems to find its type-system correspondent in sum types (a.k.a. disjoint unions). I then wonder (question 3) why functional languages such as OCaml and Haskell have chosen sum types (a.k.a. tagged unions) to represent these instead of the "more usual" non-disjoint union (e.g. TypeScript union types). I guess this could be related to the constructive necessity of proofs in the view of intuitionistic logic, but I'm
In most literature explaining functional programming that I have read, it is often stated that functions in Haskell are mathematical functions: that the notion of function in Haskell is the one that exists in mathematics (this is done to differentiate them from methods/procedures).
It is also stated that Haskell is an implementation of the lambda calculus.
I am not a mathematician, but I am aware that functions in mathematics can be derived from set theory, where a function is a mapping of values from a set called the domain to another set called the codomain. This then leads to 3 categories of functions: surjective, injective and bijective.
Also, from the simplified explanations of lambda calculus that I have read, it is stated that the lambda calculus has three basic components, or lambda terms: expressions, variables, and abstractions, where abstraction is similar to the notion of a function.
The question now is: is abstraction as defined in lambda calculus one and the same as function as arrived at via set theory? I am not sure, as the few explanations of lambda calculus I have read did not clarify whether this relationship exists or not. For example, in the materials I have read it is not explicitly stated whether abstraction in lambda calculus enforces the constraint that says x can map to one and only one y.
Thoughts?
Hello, I'm a Lisp beginner and just saw this talk on lambda calculus, and I like how the basics are explained. The examples are written in JavaScript; how would I write them in Common Lisp (especially the combinators)? From what I see, it's better to bind a lambda to a variable with defparameter or setq rather than use defun, so they return a lambda.
I tried:
(defpackage lambda
(:use :cl))
(in-package :lambda)
;; IDIOT - λa.a
(defparameter I (lambda (a) a))
;; MOCKINGBIRD - λf.ff
(defparameter M (lambda (f) (funcall f f)))
;; KESTREL - λab.a (Doesn't work - just returns the last value)
(defparameter K (lambda (a b) a))
But this seems wrong, and I'm already stuck on KESTREL as it just returns the last value (as Common Lisp does). And KESTREL should be curried, from what I gather: a lambda that takes a and returns a lambda that takes b and returns a. Not sure how to do that.
I can now invoke I I and M I without surrounding parens in the SLIME REPL, which, if I understand the talk, is what should happen. (Not sure if I and M are correctly implemented either.)
How do I write the JS F => a => b => a + b in Common Lisp?
Any help is greatly appreciated.
It is well known one can compile the lambda calculus to Cartesian closed categories.
Dually it is possible to interpret the lambda calculus in reverse and compile to co-CCCs.
I took an existing compiler I had made and flipped all the guts around and now I can compile from simple higher order abstract syntax https://github.com/sstewartgallus/prologish/blob/master/src/Hoas.hs to a simple categorical structure https://github.com/sstewartgallus/prologish/blob/master/src/Mal.hs .
Products become like sums and sums become like products and the result is truly horrendously confusing.
https://github.com/sstewartgallus/prologish/blob/master/app/Main.hs
I also wrote a simple interpreter https://github.com/sstewartgallus/prologish/blob/master/src/AsEval.hs that interprets the coexponential type using continuations, but finding a sensible computational interpretation is even more befuddling.
I think it should be possible to do some Prolog like stuff with this but so far I am just confused.
The obvious choice of counting beta-reductions does not give a reasonable cost model for the lambda calculus due to the possibly arbitrarily large complexity of the body (RHS) of lambda applications. This leads me to consider a K-restricted version of the LC where applications have at most K occurrences of the argument on the RHS. Some questions naturally follow:
Thoughts or suggestions for further reading?
Hi
I don't know if I already posted this as its own post lately, but I'm feeling down and need encouragement.
I rewrote much of an existing compiler in Java ( https://github.com/sstewartgallus/jsystemf ) to Haskell after getting frustrated. I'll probably see if I can get this to work in Frege.
The current compiler is here.
https://github.com/sstewartgallus/hs-callbypushvalue/tree/master/src
The original idea was to get a lazy functional tail recursive language working on the JVM and to go really fast. I am very far from that goal but still have some interesting results anyway.
I chose System F as the basic IR to work from because it's pretty universal and I got stuck experimenting and figuring out my own language ideas. Basically everything can compile to System F anyway.
I compiled to call by push value because strictness analysis looked complicated and I needed something like A-normal form anyway.
Call by push value just seems like a natural language to work in for dealing with thunks and a non-termination side effect. I was hoping to compile to JVM bytecode at runtime from a high-level System F AST, so that with inlining at runtime I wouldn't need to do strictness analysis. The difficulties with making a raw Java compiler probably put a damper on that, unless Frege will work well enough.
Next I compile to call by push value with explicit Callcc-like methods. Thunks can be reduced to a sort of double negation of a value in the type of stacks (the call-by-push-value variant of continuations) at this point. I added a separate step instead of compiling directly to CPS because CPS constantly confused me throughout this project. The Callcc intermediate stage was easier to get right.
After this point I compile to continuation-passing style (stack-passing style actually, but same thing). I compile to continuation-passing style because, to emulate tail calls on the JVM, I would need to use a similar construct anyway. I hoped that if I worked directly in this context in the compiler, the runtime could be a lot simpler.
I looked at various papers on how to implement tail calls performantly on the JVM and settled on a sort of explicit continuation approach.
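With hypothetical names, and in Haskell rather than JVM bytecode, that explicit-continuation approach might be sketched as a counter-carrying trampoline: each call bumps a depth counter, and past a bound the callee suspends itself as a resumable step instead of growing the stack:

```haskell
-- A computation either finishes or suspends with a resumption thunk.
data Step a = Done a | Suspend (() -> Step a)

limit :: Int
limit = 1000  -- hypothetical stack-depth bound

-- Drive a trampolined computation, restarting suspended steps.
run :: Step a -> a
run (Done a)    = a
run (Suspend k) = run (k ())

-- Mutually recursive even/odd, threading the depth counter.
evenT, oddT :: Int -> Int -> Step Bool
evenT d n
  | d > limit = Suspend (\() -> evenT 0 n)  -- unwind, resume at depth 0
  | n == 0    = Done True
  | otherwise = oddT (d + 1) (n - 1)
oddT d n
  | d > limit = Suspend (\() -> oddT 0 n)
  | n == 0    = Done False
  | otherwise = evenT (d + 1) (n - 1)
```

In Haskell tail calls don't grow the stack anyway, so this is purely an illustration of the JVM scheme: on the JVM the `Suspend` would carry the saved continuation state and the driver loop would replace the unwound frames.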
Every function is passed an explicit Continuation object that has a counter of the depth of the call stack. Every time we call a new continuation we increment and check the counter. If the counter is too large, we save our state into the object and unwind the call stack. Unless the current state is a primitive, no extra boxing/a
So, self-application is defined as M = λf. f f.
Now, let ADD = λx. λy. x + y
So,
M ADD = (λf. f f) ADD
= ADD ADD
= (λx. λy. x + y) ADD
= λy. ADD + y
As you can see, it has become nonsensical. How does self-application work with binary functions?
If we apply one beta-reduction with renaming to the expression
( λx. (λy. x) x ) y
we can get
( (λw. y) y )
but shouldn't the answer be
( (λw. x) y )