Lambda_Calculus(Pure_Data(htam_wrong_variable * dreamFunction(CIRCLE)))
πŸ‘︎ 329
πŸ“°︎ r/generative
πŸ’¬︎
πŸ‘€︎ u/Yusaku-Midory
πŸ“…︎ Feb 20 2021
🚨︎ report
Is there a model of computation whose influence on imperative languages is similar to that of the lambda calculus on functional languages?

Lambda calculus is the foundation of functional languages, in the sense that it can almost be seen as a subset of a functional language, up to minor syntactic changes.

Lambda calculus is also a model of computation, according to https://en.wikipedia.org/wiki/Model_of_computation.

Is there a model of computation whose influence on imperative languages is similar to that of the lambda calculus on functional languages?

  • The Turing machine is used in computability theory. Is writing a program on a universal Turing machine like writing a program in a procedural language?

  • The RAM (random-access machine) is used in complexity theory. Does it have the same influence on procedural imperative languages that lambda calculus has on functional languages?

Thanks.

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/timlee126
πŸ“…︎ Feb 22 2021
🚨︎ report
What is an algorithm from a theoretical CS standpoint? In the context of FSMs, Turing machines, lambda calculus, etc., what is an algorithm, or more generally a process?
πŸ‘︎ 24
πŸ’¬︎
πŸ‘€︎ u/gtboy1994
πŸ“…︎ Jan 01 2021
🚨︎ report
Lambda Calculus with Brainfuck for side effects

LambFuck

So I had this idea for an esoteric programming language. It's not exactly a complex idea, but I have a rough idea of the semantics. Let me know if there's anything I missed, or anything you think I should add.

So we have all the features of lambda calculus, which (if I'm remembering correctly) are:

  • Variables - which can be letters, words, or even numbers, since lambda calculus has no built-in numbers
  • Abstraction - these are our function definitions, which can use variables as parameters
  • Application - these are our function calls, how we apply functions to arguments

I'm probably not 100% right on that, so feel free to set me straight in the comments.

Right, onto language semantics.

We have a simplified (I know, simpler than 8?) set of the brainfuck characters:

  • ">" - increments the data pointer
  • "<" - decrements the data pointer
  • "+" - increments the byte at the data pointer
  • "-" - decrements the byte at the data pointer
  • "!" - outputs the byte at the data pointer (edited)
  • "?" - stores 1 input byte at the data pointer (tweaked from "," because that's needed later) (edited)
  • "_" - checks if the byte at the data pointer is 0, and returns TRUE (\x, y: x) if it equals 0, or FALSE (\x, y: y) if it doesn't.

All of the brainfuck operators (with the exception of "_") are lambda functions that take one parameter, perform their intended side effect, and then return the given parameter. So in a sense each is the identity function (\x: x), except that it performs a side effect.

That's the brainfuck stuff out of the way. We don't have the square brackets "[" and "]" because lambda calculus can express recursion, so the loops they provide aren't required.
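The claim that recursion replaces "[" and "]" rests on fixed-point combinators. A quick sketch in Python, using Z (the strict-evaluation variant of Y, since Python evaluates arguments eagerly); this is an illustration, not LambFuck syntax:

```python
# Z combinator: a fixed-point combinator that works under applicative order.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Looping via recursion alone: no built-in loop construct needed.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

`fact(5)` reduces to `120` purely through self-application, with no loop syntax anywhere.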

Now for the lambda calculus semantics. Sticking with the theme of brainfuck, let's make them all single characters:

  • "\" - indicates the definition of a function
  • ":" - indicates the end of the parameter list
  • ";" - defines the end of a function definition
  • "=" - used to assign functions to variables
  • "#" - defines a comment
  • "(" & ")" - used to disambiguate variables in function application.

White space is used as a separator for parameters in a function definition, and those passed into a function.

So, that's pretty much it: 14 symbols and a functional programming language. Note that, as with lambda calculus, alpha conversion and beta reduction would be performed, but that would likely be done by the compiler, unless you wish to do it manually at the time of writing.

In terms of use

... keep reading on reddit ➡

πŸ‘︎ 74
πŸ’¬︎
πŸ‘€︎ u/MADH95
πŸ“…︎ Nov 29 2020
🚨︎ report
What is the granularity of specifying applicative or normal evaluation order in lambda calculus (and functional languages)?

In lambda calculus and other functional languages, what is the granularity of specifying applicative or normal evaluation order:

  • language-wide (for all the function calls),
  • individual functions (for all the function calls for a given function),
  • individual function calls?

I also seem to remember that in some functional language (Haskell?) the evaluation order is a language-wide choice: by default it is either applicative or normal order, and it can be switched by a statement in programs. When it is switched, will the evaluation order of all the functions defined both before and after the switch change? Can we switch the evaluation order for all the calls to one function, without affecting calls to other functions? Can we switch the evaluation order of a single function call, without affecting other function calls?

I have the above question because, for the normal order sequencing combinator Seq, when evaluating ((Seq (display "hello")) (display "world")) to ((λz.(display "world")) (display "hello")), I am wondering whether ((λz.(display "world")) (display "hello")) is evaluated in applicative or normal order, since I guess both "hello" and "world" are supposed to be printed, but in normal order "hello" isn't printed.
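For what it's worth, the difference can be sketched with thunks (zero-argument lambdas standing in for unevaluated terms); Python here is just a stand-in evaluator, and the function names are invented for the sketch:

```python
def seq_applicative(x, y):
    # Applicative order: the argument is evaluated before the call,
    # so x's side effect ("hello") happens first.
    v = x()
    return (lambda z: y())(v)

def seq_normal(x, y):
    # Normal order: the argument is passed unevaluated; since z is never
    # used in the body, x is never forced and "hello" never prints.
    return (lambda z: y())(x)
```

With `x = lambda: print("hello")` and `y = lambda: print("world")`, the applicative version prints both lines, while the normal-order version prints only "world".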

Thanks.

πŸ‘︎ 2
πŸ“°︎ r/compsci
πŸ’¬︎
πŸ‘€︎ u/timlee126
πŸ“…︎ Feb 12 2021
🚨︎ report
Simple untyped lambda calculus interpreter

Hi

I'm experimenting with the untyped lambda calculus in Coq in preparation for formalizing some compiler IRs for a permanently-WIP compiler I'm working on.

I need some advice on what styles work best in Coq and how to abuse the notation system for embedded DSLs.

I'm messing around and I'm not sure what works and what's dumb.

I vacillate between various flavors of variable binders and I'm still learning what's powerful and convenient.

Parametric higher order abstract syntax has its good points but is very painful in some other ways and these issues super-size when experimenting with dependent types.

Right now I'm playing with thinking of variable substitution in terms of repeated substitution of one-hole contexts. It kind of makes sense? I thought maybe finding some core mathematical "what is it" would make things nice, but honestly I don't think I've found the "what is it", if it's there.

The stuff I've got right now doesn't look very nice and doesn't look ready for proving anything sensible. I have some vague ideas about alpha renaming/recycling of variable binders/resource management with some linear-comonoid-like structure, but that doesn't really work well in Coq, which of course doesn't support linear types, or at least not easily. Some sort of region encoding seems possible but might be pretty ugly.

Some particular nits:

  • I'm not sure how to handle partial functions well, such as D, which finds a one-hole context if possible. I'd really like the "hole" to be an implicit parameter, but this just doesn't work well when you have an option type in the way. I might be able to shift things around if a proof that a variable occurs in the body were passed in, but this sounds ugly.

  • I might be abusing sections too much. Not sure.

  • I don't even use the whole coinductive list approximation stuff in the interpreter. Not sure of something that would work when extracted to a target language.

  • Not sure how to make Eval reduce more stuff. I wish I could leave the variable binder type abstract but I don't think that's possible for Eval.

  • I think I'm going to need to pass around a sort of partial order along with "split", enforcing that the cloned children of a variable binder do not compare equal to it. Not quite sure how to mathematize a parent-children relationship with "split".

https://gist.github.com/sstewartgallus/42e23fbb1de94475ec2cd44569be39fd

πŸ‘︎ 6
πŸ“°︎ r/Coq
πŸ’¬︎
πŸ‘€︎ u/Molossus-Spondee
πŸ“…︎ Feb 13 2021
🚨︎ report
Fun with Lambda Calculus stopa.io/post/263
πŸ‘︎ 200
πŸ“°︎ r/programming
πŸ’¬︎
πŸ‘€︎ u/stepanp
πŸ“…︎ Oct 19 2020
🚨︎ report
Is eta-reduction by default not performed in lambda calculus?

In lambda calculus,

  • is a lambda abstraction always a value and therefore not evaluated further, regardless of whether the body of the abstraction can be further evaluated?

  • In other words, is eta-reduction by default not performed? Are only beta-reduction and alpha-conversion performed by default?

Are the answers to the above questions the same for both applicative order and normal order evaluations?

Thanks.

πŸ‘︎ 3
πŸ“°︎ r/compsci
πŸ’¬︎
πŸ‘€︎ u/timlee126
πŸ“…︎ Feb 12 2021
🚨︎ report
What is an algorithm from a theoretical CS standpoint? In the context of FSMs, Turing machines, lambda calculus, etc., what is an algorithm, or more generally a process? /r/AskComputerScience/com…
πŸ‘︎ 22
πŸ“°︎ r/algorithms
πŸ’¬︎
πŸ‘€︎ u/gtboy1994
πŸ“…︎ Jan 01 2021
🚨︎ report
How to implement Normal Order Reduction for Lambda Calculus?

I am trying to implement normal order reduction of the lambda calculus in Haskell; however, I'm facing issues with two functions that are supposed to implement call-by-name and normal order reduction.

 data LExpr = Var String
            | App LExpr LExpr
            | Lam String LExpr


 newVar :: [String] -> String
 newVar ls = go "z"
    where go s | filter (==s) ls == [] = s
               | otherwise = go (s ++ "'")


 subst :: LExpr -> String -> LExpr -> LExpr
 subst m x (App e1 e2) = App (subst m x e1) (subst m x e2)
 subst m x (Var str) | str == x = m
                     | otherwise = Var str
 subst m x (Lam str e) | str == x = Lam str e
                       | str /= x && not (elem str (free m)) = Lam str (subst m x e)
                       | otherwise = subst m x ((\y -> Lam y (subst (Var y) str e)) (newVar (free m)))

This is the function I've tried to implement for lazy evaluation (normal order reduction); however, it doesn't work correctly:

 lazy :: LExpr -> LExpr
 lazy = flip go []
   where
         go (App e1 e2) ls   = go e1 (e2:ls)
         go (Lam y e) (l:ls) = go (subst l y e) ls
         go la@(Lam y e) []  = la
         go v@(Var _)    ls  = foldl App v ls

 e = Lam "y" (App (Var "f") (App (Var "x") (Var "y")))
 m = App (Var "g") (Var "y")

I'd really appreciate some advice and help on how to fix this function. Thank you!

Edit : Included the rest of my code

πŸ‘︎ 2
πŸ“°︎ r/haskellquestions
πŸ’¬︎
πŸ‘€︎ u/ChevyAmpera
πŸ“…︎ Jan 20 2021
🚨︎ report
Lambda Calculus

As much as I read and try to understand lambda calculus, I just can't. Can someone explain it to me in the simplest way, so I can at least understand the basics of it? :D

πŸ‘︎ 11
πŸ“°︎ r/haskell
πŸ’¬︎
πŸ‘€︎ u/eIisus
πŸ“…︎ Dec 08 2020
🚨︎ report
Simply-typed lambda calculus in F#

Github repository

I couldn't find many small implementations of a simply-typed lambda calculus, so I decided to create one in F#. This might be useful for anyone interested in the basics of implementing a functional programming language.

πŸ‘︎ 16
πŸ“°︎ r/fsharp
πŸ’¬︎
πŸ‘€︎ u/brianberns
πŸ“…︎ Jan 06 2021
🚨︎ report
Crafting Semantics 1: Lambda Calculus drs.is/post/crafting-sema…
πŸ‘︎ 16
πŸ’¬︎
πŸ‘€︎ u/Sinistersnare
πŸ“…︎ Jan 06 2021
🚨︎ report
λvc: Variadic Auto-Curried Lambda Calculus pdarragh.github.io/variad…
πŸ‘︎ 65
πŸ’¬︎
πŸ‘€︎ u/bjzaba
πŸ“…︎ Nov 29 2020
🚨︎ report
Lambda Calculus Parser Issue

I am creating a lambda calculus parser for fun, and I'm running into some issues because I am not very experienced with Parsec. The code I wrote to parse expressions is below:

data Expression
  = Variable String
  | Abstraction Expression Expression
  | Application Expression Expression

parens :: Parser a -> Parser a
parens = between (char '(') (char ')')

expression :: Parser Expression
expression = parens expression <|> abstraction <|> try application <|> variable

abstraction :: Parser Expression
abstraction =
  Abstraction
    <$> between (char '\\') (char '.') variable
    <*> expression

application :: Parser Expression
application =
  Application
    <$> expression
    <*> (space >> expression)

variable :: Parser Expression
variable = Variable <$> many1 alphaNum

There are several issues with this code:

  1. When it tries to parse an application, it runs into an infinite loop: it can never parse the first expression as a variable, because it tries to parse an application first (I can explain this better if needed). If I put variable before application, it never parses an application, because variable succeeds and so the parser doesn't consume the whole input.

  2. Many parentheses are needed because applications have no associativity (I think I'm using that word right)

Any help would be appreciated
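For issue 1, the standard cure is to remove the left recursion: parse an application as a sequence of "atoms" and fold them to the left, which also gives the conventional left associativity from issue 2. A rough sketch in Python (the tuple AST and all names are assumptions of the sketch, not the Parsec code above):

```python
import re

def tokenize(src):
    # Lambdas look like \x.body; parentheses group; names are alphanumeric.
    return re.findall(r"[()\\.]|\w+", src)

def parse(tokens):
    expr, rest = parse_expr(tokens)
    assert not rest, f"unconsumed input: {rest}"
    return expr

def parse_expr(ts):
    # An expression is one or more atoms; applications fold to the LEFT,
    # so "f x y" parses as ((f x) y) with no parentheses required.
    expr, ts = parse_atom(ts)
    while ts and ts[0] != ")":
        arg, ts = parse_atom(ts)
        expr = ("app", expr, arg)
    return expr, ts

def parse_atom(ts):
    t, ts = ts[0], ts[1:]
    if t == "(":
        e, ts = parse_expr(ts)
        return e, ts[1:]                  # drop the closing ")"
    if t == "\\":
        var, ts = ts[0], ts[1:]
        assert ts[0] == ".", "expected '.' after parameter"
        body, ts = parse_expr(ts[1:])     # body extends as far right as possible
        return ("lam", var, body), ts
    return ("var", t), ts
```

The same shape should transfer to Parsec as something like `foldl1 Application <$> many1 atom`, where `atom` covers variables, parenthesized expressions, and abstractions.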

πŸ‘︎ 6
πŸ“°︎ r/haskellquestions
πŸ’¬︎
πŸ‘€︎ u/Evthestrike
πŸ“…︎ Dec 09 2020
🚨︎ report
With respect to the understanding of the subject matter [lambda calculus]: your handle on it may not be as deep as you think. I’d suggest going deeper. reddit.com/r/programming/…
πŸ‘︎ 59
πŸ’¬︎
πŸ‘€︎ u/Beheddard
πŸ“…︎ Oct 19 2020
🚨︎ report
ELI5: Lambda calculus (from which we get the term "a lambda") is a generalization of variable substitution, like the substitution we perform in ordinary algebra reddit.com/r/scheme/comme…
πŸ‘︎ 31
πŸ’¬︎
πŸ‘€︎ u/ws-ilazki
πŸ“…︎ Oct 15 2020
🚨︎ report
Fun with Lambda Calculus stopa.io/post/263
πŸ‘︎ 28
πŸ“°︎ r/Clojure
πŸ’¬︎
πŸ‘€︎ u/stepanp
πŸ“…︎ Oct 18 2020
🚨︎ report
A basic question about lambda calculus

Wikipedia tells me that lambda calculus "is a universal model of computation that can be used to simulate any Turing machine."

Does that mean that a large software package (say, League of Legends) could be represented as a set of lambda expressions? (not that it would be practical to represent it that way)
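In principle, yes: anything computable can be expressed this way, because even data bottoms out as functions. A tiny taste of the encoding trick, sketched with Python lambdas (Church numerals; the names are the usual textbook ones):

```python
# Numbers as functions: the numeral n means "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Decode a Church numeral back to a Python int, just to observe it.
to_int = lambda n: n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
```

`to_int(add(two)(three))` yields `5`; scaling the same idea up to a whole game is hopeless in practice but fine in principle.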

πŸ‘︎ 17
πŸ’¬︎
πŸ‘€︎ u/BrightBad6
πŸ“…︎ Oct 07 2020
🚨︎ report
Lambster, an online lambda calculus interpreter written in TypeScript

Hey everyone!

This is a project I worked on during the start of the summer, and after some rewrites of the front end I think it's in a good enough state to share with people. The interpreter is written in TypeScript and is my first major PL project, so if you have constructive criticism/advice I'd appreciate it. If you want to play around with the interpreter, it's available to try on lambster.dev or you can use the CLI by installing it via npm.

Please let me know what you think. Thanks!

Links:

Lambster: https://lambster.dev

Interpreter source: https://github.com/minchingtonak/lambster

Website source: https://github.com/minchingtonak/lambster.dev

πŸ‘︎ 65
πŸ’¬︎
πŸ‘€︎ u/Slarrty
πŸ“…︎ Aug 24 2020
🚨︎ report
Fun with Lambda Calculus stopa.io/post/263
πŸ‘︎ 18
πŸ’¬︎
πŸ‘€︎ u/stepanp
πŸ“…︎ Oct 18 2020
🚨︎ report
Software is just lambda calculus news.ycombinator.com/item…
πŸ‘︎ 42
πŸ’¬︎
πŸ‘€︎ u/cmqv
πŸ“…︎ Aug 24 2020
🚨︎ report
Programming in Lambda Calculus

I have written a little paper:

Programming in Lambda Calculus

The text looks at lambda calculus from a programmer's point of view. Everyone with some programming experience should be able to read and understand it.

Special attention is given to how to represent algebraic data types in lambda calculus.

I hope some of you might enjoy the paper.
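As a flavor of the paper's theme, here is the classic encoding of one algebraic data type, the pair, as nothing but lambdas, sketched in Python (the paper itself covers the general construction; this is just an appetizer):

```python
# A pair "remembers" its components by waiting for a consumer function f.
pair = lambda a: lambda b: lambda f: f(a)(b)
fst = lambda p: p(lambda a: lambda b: a)   # project the first component
snd = lambda p: p(lambda a: lambda b: b)   # project the second component
```

So `fst(pair(1)(2))` gives back `1` without any built-in tuple type in sight.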

πŸ‘︎ 223
πŸ“°︎ r/compsci
πŸ’¬︎
πŸ‘€︎ u/hbrandl
πŸ“…︎ Jun 26 2020
🚨︎ report
How to expand Simply-typed Lambda Calculus based on the Curry-Howard correspondence

Hi there.

I've been recently learning on the subject of the "Propositions as Types" paradigm which stems from the Curry-Howard correspondence, but can't seem to find online material with beginner-friendly explanations of the "next steps" following the correspondence of Simply-typed Lambda calculus and Propositional calculus (a.k.a. zeroth-order logic). Therefore, I would appreciate help with some questions and/or pointers to resources which may answer them.

From what I understand so far, adding types to lambda calculus revokes its Turing-completeness, as it becomes "strongly normalizing", meaning every well-typed program terminates, reducing to a normal form (question 1: proof?). It is then pretty straightforward to see that its type system corresponds to propositional calculus with implication (->). So far so good.

At this point begins the logic system rabbit-hole.

Further reading on different logics made me wonder which logic, exactly, the correspondence is to. It is clearly not classical logic, as that seems to require call/cc or some other control mechanism, and it seems the answer is intuitionistic propositional logic (question 2: which variant?). However, I don't see where the other logical connectives are, or a clear listing of the intuitionistic axioms and their corresponding rules in simply-typed lambda calculus.

It seems natural to extend the simply-typed lambda calculus to correspond to a more powerful logic. For instance, adding a product type (*) gets us the correspondent of logical conjunction (AND). Logical disjunction (OR), then, seems to find its type-system correspondent in sum types (a.k.a. disjoint unions). I then wonder (question 3) why functional languages such as OCaml and Haskell have chosen sum types (a.k.a. tagged unions) to represent these instead of the "more usual" non-disjoint union (e.g. TypeScript union types). I guess this could be related to the constructive necessity of proofs in the view of intuitionistic logic, but I'm

... keep reading on reddit ➡
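The conjunction/disjunction correspondence discussed above can be made concrete with Church-style encodings. A small Python illustration (inl/inr are the usual injection names; everything else is an assumption of the sketch):

```python
# Disjunction as a sum: a value of A+B is "left a" or "right b",
# and consuming it means supplying a handler for each case.
inl = lambda a: lambda on_left: lambda on_right: on_left(a)
inr = lambda b: lambda on_left: lambda on_right: on_right(b)

# Case analysis corresponds to the logical rule of disjunction elimination.
describe = lambda e: e(lambda a: f"left {a}")(lambda b: f"right {b}")
```

The tag (which injection was used) is exactly what makes the union disjoint, mirroring the constructive requirement that a proof of A OR B says which side it proves.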

πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/Zkirmisher
πŸ“…︎ Nov 15 2020
🚨︎ report
What is the relationship between a function as defined in set theory and an abstraction in lambda calculus?

In most literature explaining functional programming that I have read, it is often stated that functions in Haskell are mathematical functions; that the notion of function in Haskell is the one that exists in mathematics (this is done to differentiate them from methods/procedures).

It is also stated that Haskell is an implementation of the lambda calculus.

I am not a mathematician, but I am aware that functions in mathematics can be derived from set theory, where a function is a mapping of values from a set called the domain to another set called the codomain. This then leads to 3 categories of functions: surjective, injective, and bijective.

Also, from the simplified explanations of lambda calculus that I have read, it is stated that the lambda calculus has three basic components, or lambda terms: expressions, variables, and abstractions, where abstraction is similar to the notion of a function.

The question now is: is abstraction as defined in lambda calculus one and the same as function as derived via set theory? I am not sure, as the few explanations of lambda calculus I have read did not clarify whether this relationship exists. For example, the materials I have read do not explicitly state whether abstraction in lambda calculus enforces the constraint that an x can map to one and only one y.

Thoughts?

πŸ‘︎ 37
πŸ“°︎ r/haskell
πŸ’¬︎
πŸ‘€︎ u/finlaydotweber
πŸ“…︎ Aug 02 2020
🚨︎ report
How to implement basic lambda calculus operators in common lisp

Hello, I'm a Lisp beginner and just saw this talk on lambda calculus, and I like how the basics are explained. The examples are written in JavaScript; how would I write them in Common Lisp (especially the combinators)? From what I see, it's better to bind a lambda to a variable with defparameter or setq rather than use defun, so they return a lambda.

I tried:

(defpackage lambda
  (:use :cl))
(in-package :lambda)

;; IDIOT - λa.a
(defparameter I (lambda (a) a))

;; MOCKINGBIRD - λf.f f
(defparameter M (lambda (f) (funcall f f)))

;; KESTREL - λab.a (Doesn't work - just returns the last value)
(defparameter K (lambda (a b) a))

But this seems wrong, and I'm already stuck on KESTREL, as it just returns the last value (as Common Lisp does). And KESTREL should be curried from what I gather: a lambda that takes a and returns a lambda that takes b and returns a. Not sure how to do that.

I can now invoke I I and M I without surrounding parens in the SLIME REPL, which, if I understand the talk, is what should happen. (I'm not sure whether I and M are correctly implemented either.)

How do I write the JS F => a => b => a + b in Common Lisp?

Any help is greatly appreciated.
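Currying by hand means one-argument lambdas that return lambdas; in Common Lisp the shape would be along the lines of (defparameter K (lambda (a) (lambda (b) a))), invoked via nested funcall. The same idea in Python, purely for a compact illustration:

```python
K = lambda a: lambda b: a          # KESTREL, curried: \a.\b.a
add = lambda a: lambda b: a + b    # the JS a => b => a + b
```

`K(1)` returns a function that ignores its argument and yields `1`, so `K(1)(2)` is `1`; likewise `add(3)(4)` is `7`.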

πŸ‘︎ 6
πŸ“°︎ r/learnlisp
πŸ’¬︎
πŸ‘€︎ u/MakeItEnd14
πŸ“…︎ Sep 25 2020
🚨︎ report
I wrote a compiler from the lambda calculus to co-CCCs

It is well known one can compile the lambda calculus to Cartesian closed categories.

Dually it is possible to interpret the lambda calculus in reverse and compile to co-CCCs.

I took an existing compiler I had made and flipped all the guts around and now I can compile from simple higher order abstract syntax https://github.com/sstewartgallus/prologish/blob/master/src/Hoas.hs to a simple categorical structure https://github.com/sstewartgallus/prologish/blob/master/src/Mal.hs .

Products become like sums and sums become like products and the result is truly horrendously confusing.

https://github.com/sstewartgallus/prologish/blob/master/app/Main.hs

I also wrote a simple interpreter https://github.com/sstewartgallus/prologish/blob/master/src/AsEval.hs that interprets the coexponential type using continuations, but finding a sensible computational interpretation is even more befuddling.

I think it should be possible to do some Prolog like stuff with this but so far I am just confused.

πŸ‘︎ 88
πŸ“°︎ r/haskell
πŸ’¬︎
πŸ‘€︎ u/Molossus-Spondee
πŸ“…︎ Sep 12 2020
🚨︎ report
reduction-restricted lambda calculus

The obvious choice of counting beta-reductions does not give a reasonable cost model for the lambda calculus due to the possibly arbitrarily large complexity of the body (RHS) of lambda applications. This leads me to consider a K-restricted version of the LC where applications have at most K occurrences of the argument on the RHS. Some questions naturally follow:

  1. Has this already been studied, perhaps with a different name?
  2. Can unrestricted LC programs be simply translated to, say, 2-restricted LC with minimal blow-up in length or overall complexity?
  3. Does this actually give a reasonable cost model or allow a more efficient evaluator?
  4. With the restriction on RHS complexity, does this make 2-restricted LC a combinator calculus with a finite number of combinators?

Thoughts or suggestions for further reading?

πŸ‘︎ 45
πŸ“°︎ r/compsci
πŸ’¬︎
πŸ‘€︎ u/LobYonder
πŸ“…︎ Sep 01 2020
🚨︎ report
An introduction to Lambda Calculus, explained through JavaScript willtaylor.blog/an-introd…
πŸ‘︎ 3
πŸ“°︎ r/Frontend
πŸ’¬︎
πŸ‘€︎ u/fagnerbrack
πŸ“…︎ Oct 26 2020
🚨︎ report
An introduction to Lambda Calculus, explained through JavaScript willtaylor.blog/an-introd…
πŸ‘︎ 2
πŸ“°︎ r/node
πŸ’¬︎
πŸ‘€︎ u/fagnerbrack
πŸ“…︎ Oct 22 2020
🚨︎ report
An introduction to Lambda Calculus, explained through JavaScript willtaylor.blog/an-introd…
πŸ‘︎ 2
πŸ“°︎ r/webdev
πŸ’¬︎
πŸ‘€︎ u/fagnerbrack
πŸ“…︎ Oct 17 2020
🚨︎ report
Simple type preserving compiler from lambda calculus to call by push value and continuation passing style

Hi

Don't know if I posted this as its own post lately but I'm feeling down and need encouragement.

I rewrote much of an existing Java compiler (https://github.com/sstewartgallus/jsystemf) in Haskell after getting frustrated. I'll probably see if I can get this to work in Frege.

The current compiler is here.

https://github.com/sstewartgallus/hs-callbypushvalue/tree/master/src

The original idea was to get a lazy functional tail recursive language working on the JVM and to go really fast. I am very far from that goal but still have some interesting results anyway.

I chose System F as the basic IR to work from because it's pretty universal and I got stuck experimenting and figuring out my own language ideas. Basically everything can compile to System F anyway.

I compiled to call by push value because strictness analysis looked complicated and I needed something like A Normal Form anyway.

Call by push value just seems like a natural language to work in for dealing with thunks and a non-termination side effect. I was hoping to compile to JVM bytecode at runtime from a high-level System F AST, so with inlining at runtime I wouldn't need to do strictness analysis. The difficulties of writing a raw Java compiler probably put a damper on that, unless Frege works well enough.

Next I compile to call by push value with explicit callcc-like methods. At this point, thunks can be reduced to a sort of double negation of a value in the type of stacks (the call-by-push-value variant of continuations). I added a separate step instead of compiling directly to CPS because CPS constantly confused me throughout this project; the callcc intermediate stage was easier to get right.

After this point I compile to continuation passing style (stack passing style actually but same thing.) I compile to continuation passing style because to emulate tail calls on the JVM I would need to use a similar construct anyway. I hoped if I worked directly in this context in the compiler the runtime could be a lot simpler.

I looked at various papers on how to implement tail calls performantly on the JVM and settled on a sort of explicit continuation approach.

Every function is passed an explicit Continuation object that holds a counter of the call-stack depth. Every time we call a new continuation we increment and check the counter. If the counter is too large, we save our state into the object and unwind the call stack. Unless the current state is a primitive no extra boxing/a

... keep reading on reddit ➡
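The counter-and-unwind scheme described above is essentially a trampoline. A toy sketch in Python (a stand-in for the JVM mechanism; the depth threshold and all names are invented for illustration):

```python
def trampoline(step):
    # Keep bouncing: a callable result means "resume me on a fresh stack".
    while callable(step):
        step = step()
    return step

def countdown(n, depth=0):
    if n == 0:
        return "done"
    if depth > 500:                         # counter too large: unwind
        return lambda: countdown(n - 1, 0)  # saved state, resumed by trampoline
    return countdown(n - 1, depth + 1)      # ordinary (stack-consuming) call
```

`trampoline(countdown(100000))` completes even though a naive recursion of that depth would overflow the stack.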

πŸ‘︎ 21
πŸ“°︎ r/haskell
πŸ’¬︎
πŸ‘€︎ u/Molossus-Spondee
πŸ“…︎ Jul 06 2020
🚨︎ report
Lambda calculus self application confusion

So, self application is defined as M = λf. f f.

Now, let ADD = λx. λy. x + y

So,

M ADD = (λf. f f) ADD
= ADD ADD
= (λx. λy. x + y) ADD
= λy. ADD + y

As you can see, it has become nonsensical. How does self-application work with binary functions?
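For what it's worth: self-application itself is well-formed in the untyped calculus; the nonsense enters only because ADD's body assumes its argument is a number. A sketch with Python lambdas as stand-ins (purely illustrative):

```python
M = lambda f: f(f)    # mockingbird: \f. f f
I = lambda x: x       # identity: \x. x
ADD = lambda x: lambda y: x + y

# M I = I I = I: self-application is perfectly sensible here.
assert M(I) is I

# M ADD builds \y. ADD + y without complaint; the problem only surfaces
# when the body is forced and "function + number" is attempted.
stuck = M(ADD)
try:
    stuck(1)
except TypeError:
    pass  # the analogue of the "nonsensical" term above
```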

πŸ‘︎ 2
πŸ“°︎ r/learnmath
πŸ’¬︎
πŸ‘€︎ u/Mycroft2046
πŸ“…︎ Dec 07 2020
🚨︎ report
Lambda Calculus beta-reduction question

If we apply one beta-reduction with renaming to the expression

( πœ†x. (πœ†y. x) x ) y

we can get

( (πœ†w. y) y )

but shouldn't the answer be

( (πœ†w. x) y )

πŸ‘︎ 5
πŸ“°︎ r/haskell
πŸ’¬︎
πŸ‘€︎ u/zeinfree
πŸ“…︎ Jul 26 2020
🚨︎ report
