Lamping's Abstract Algorithm
A cleaned up adaptation of Lamping's abstract algorithm. This is the most beautiful algorithm I know. It evaluates functions optimally, in the sense that no other strategy can evaluate them in fewer parallel reduction steps. Not only that, it automatically exploits any inherent parallelizability of your program, without any added effort. My implementation is sequential, but still optimal for the sequential case. It works by converting λ-calculus terms to a form of graph known as interaction combinators (but symmetric, and with infinite node colors), reducing the graph to normal form (by applying the rules below lazily), and then reading the result back as a λ-term.
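
The reduction itself boils down to two local rewrite rules on active pairs (two nodes joined by their principal ports): annihilation when the colors match, commutation when they differ. Here is a rough sketch of those rules in plain JavaScript, using a hypothetical pointer-based node representation rather than the packed arrays the actual reducer uses; it only illustrates the shape of the rules.

```javascript
// Each node has a color and 3 ports: 0 = principal, 1-2 = auxiliary.
function node(color) {
  return { color, ports: [null, null, null] };
}

// Wire port `sa` of node `a` to port `sb` of node `b`, in both directions.
function link(a, sa, b, sb) {
  a.ports[sa] = { node: b, slot: sb };
  b.ports[sb] = { node: a, slot: sa };
}

// Rewrite one active pair: `a` and `b` are connected by their principal ports.
// (Corner cases where an auxiliary port points back into the pair are ignored.)
function rewrite(a, b) {
  const a1 = a.ports[1], a2 = a.ports[2];
  const b1 = b.ports[1], b2 = b.ports[2];
  if (a.color === b.color) {
    // Annihilation: same color -> both nodes vanish and their auxiliary
    // ports are wired to each other pairwise.
    link(a1.node, a1.slot, b1.node, b1.slot);
    link(a2.node, a2.slot, b2.node, b2.slot);
  } else {
    // Commutation: different colors -> each node is duplicated and the four
    // copies are cross-wired, so the nodes pass through each other.
    const aA = node(a.color), aB = node(a.color);
    const bA = node(b.color), bB = node(b.color);
    link(bA, 0, a1.node, a1.slot);
    link(bB, 0, a2.node, a2.slot);
    link(aA, 0, b1.node, b1.slot);
    link(aB, 0, b2.node, b2.slot);
    link(aA, 1, bA, 1);
    link(aA, 2, bB, 1);
    link(aB, 1, bA, 2);
    link(aB, 2, bB, 2);
  }
}
```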

Usage
Install
npm install abstract-algorithm
Use
```javascript
const A = require("abstract-algorithm");

// church-encoded exponentiation of 2^2
const term = "(s.z.(s (s z)) s.z.(s (s z)))";

const normalForm = A.netToLam(A.reduceNet(A.lamToNet(term)));
```
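
Since the term is the Church encoding of 2^2, the normal form should be the Church numeral 4; assuming the readback prints terms in the same notation as the input, logging it would look roughly like this:

```javascript
// Illustrative only; the exact printed form depends on how the library
// renders terms on readback.
console.log(normalForm); // e.g. s.z.(s (s (s (s z))))
```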
Bla bla bla
Here is a crappy handwritten explanation of how it works. Very simple, isn't it? Here is a working example of how to use this lib to reduce untyped λ-terms. The entire reduction algorithm is implemented as a <150 LOC file operating on arrays (sharing-graph.js), and the converter between λ-terms and graphs is about half of that (lambda-encoder). The only drawback is that this only works with stratifiable terms, which roughly means the algorithm breaks when you have duplications inside of duplications. I personally blame the λ-calculus for that defect, since it allows expressing very illogical / garbage terms. The good news is that well-written, sensible terms work, and, if it doesn't work, your λ-term is probably bad and you should feel bad.
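
To give a concrete idea of what "operating on arrays" means, here is one plausible flat encoding, purely for illustration; the actual layout inside sharing-graph.js may differ.

```javascript
// Each node occupies 4 consecutive slots: three ports and a color. A port is
// addressed by the integer (node << 2) | slot, which is also its index into
// the array, so the whole reducer reduces to integer arithmetic on one array.
const mem = [];

function allocNode(color) {
  const node = mem.length >> 2;
  mem.push(0, 0, 0, color); // ports 0..2 (0 = principal), then the color
  return node;
}

const port   = (node, slot) => (node << 2) | slot;
const nodeOf = (ptr) => ptr >> 2;
const slotOf = (ptr) => ptr & 3;

// Each port stores a pointer to its peer, so wiring is two array writes.
function wire(ptrA, ptrB) {
  mem[ptrA] = ptrB;
  mem[ptrB] = ptrA;
}

// An active pair (redex) is a wire connecting two principal ports.
const isRedex = (node) => slotOf(mem[port(node, 0)]) === 0;
```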
Here is a non-inlined reducer (much prettier, and great for understanding the algorithm), and here is a poor attempt at making a WASM reducer (tip: it is not any faster, which is weird, because the C version is 10x faster than the JS one). Here is an attempt to flatten the graph-reduction algorithm into a linear automaton, such that it could, in theory, be implemented as a massively parallel ASIC where each 128-bit memory node is actually a CPU that looks at the state of its neighbors and updates its own based on a simple rule, so that the memory as a whole converges to the normal form of the graph.
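
The update discipline behind that idea is the usual cellular-automaton one: every cell computes its next state from its neighbors' current states, then all cells commit at once. A rough sketch follows, with a placeholder local rule (the real rule would encode the annihilation/commutation rewrites) and plain values standing in for the 128-bit nodes.

```javascript
// One synchronous step over the whole memory.
function tick(cells, rule) {
  // Phase 1: each cell reads only the *current* states of its neighbors
  // (boundary cells simply see `undefined` on the missing side).
  const next = cells.map((cell, i) => rule(cell, cells[i - 1], cells[i + 1]));
  // Phase 2: every cell commits at once, so update order is irrelevant and
  // the whole pass could run in parallel, one "CPU" per memory cell.
  for (let i = 0; i < cells.length; i++) cells[i] = next[i];
  return cells;
}

// Iterate until a fixed point: when no cell changes, the memory has
// converged (assuming cell states are primitive values).
function run(cells, rule) {
  let changed = true;
  while (changed) {
    const before = cells.slice();
    tick(cells, rule);
    changed = cells.some((c, i) => c !== before[i]);
  }
  return cells;
}
```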