[ ≡ | Tutorial | Reference | Examples | Background | GitHub | ▶ Try Lenses! ]
Partial Lenses
Lenses are basically a bidirectional composable abstraction for updating selected elements of immutable data structures that admits efficient implementation. This library provides a collection of partial isomorphisms, lenses, and traversals, collectively known as optics, for manipulating JSON. Users can also write new optics for manipulating non-JSON objects, such as Immutable.js collections. A partial lens can view optional data, insert new data, update existing data, and remove existing data, and can, for example, provide defaults and maintain required data structure parts. ▶ Try Lenses!
Contents
- Tutorial
- Reference
- Optics
- Traversals
- Operations on traversals
- Folds over traversals
- L.collect(traversal, maybeData) ~> [...values]
- L.collectAs((maybeValue, index) => maybeValue, traversal, maybeData) ~> [...values]
- L.foldl((value, maybeValue, index) => value, value, traversal, maybeData) ~> value
- L.foldr((value, maybeValue, index) => value, value, traversal, maybeData) ~> value
- L.maximum(traversal, maybeData) ~> maybeValue
- L.minimum(traversal, maybeData) ~> maybeValue
- L.product(traversal, maybeData) ~> number
- L.sum(traversal, maybeData) ~> number
- Creating new traversals
- Traversals and combinators
- Lenses
- Isomorphisms
- Examples
- Background
Tutorial
Let's work with the following sample JSON object:
const sampleTexts = {
contents: [{ language: "en", text: "Title" },
{ language: "sv", text: "Rubrik" }]
}

First we import libraries

import * as L from "partial.lenses"
import * as R from "ramda"

and compose a parameterized lens for accessing texts:
const textIn = language => L.compose(L.prop("contents"),
L.define([]),
L.normalize(R.sortBy(L.get("language"))),
L.find(R.whereEq({language})),
L.valueOr({language, text: ""}),
L.removable("text"),
L.prop("text"))

Take a moment to read through the above definition line by line. Each part
either specifies a step in the path to select the desired element or a way in
which the data structure must be treated at that point. The purpose of
the L.prop(...) parts is probably obvious. The other parts we will
mention below.
Querying data
Thanks to the parameterized search
part, L.find(R.whereEq({language})), of the lens composition, we
can use it to query texts:
L.get(textIn("sv"), sampleTexts)
// 'Rubrik'

L.get(textIn("en"), sampleTexts)
// 'Title'

Partial lenses can deal with missing data. If we use the partial lens to query a text that does not exist, we get the default:
L.get(textIn("fi"), sampleTexts)
// ''

We get this value, rather than undefined, thanks to
the L.valueOr({language, text: ""}) part of our lens
composition, which ensures that we get the specified value rather than null or
undefined. We get the default even if we query from undefined:
L.get(textIn("fi"), undefined)
// ''

With partial lenses, undefined is the equivalent of empty or non-existent.
Updating data
As with ordinary lenses, we can use the same lens to update texts:
L.set(textIn("en"), "The title", sampleTexts)
// { contents: [ { language: 'en', text: 'The title' },
// { language: 'sv', text: 'Rubrik' } ] }

Inserting data
The same partial lens also allows us to insert new texts:
L.set(textIn("fi"), "Otsikko", sampleTexts)
// { contents: [ { language: 'en', text: 'Title' },
// { language: 'fi', text: 'Otsikko' },
// { language: 'sv', text: 'Rubrik' } ] }

Note the position into which the new text was inserted. The array of texts is
kept sorted thanks to
the L.normalize(R.sortBy(L.get("language"))) part of our lens.
Removing data
Finally, we can use the same partial lens to remove texts:
L.set(textIn("sv"), undefined, sampleTexts)
// { contents: [ { language: 'en', text: 'Title' } ] }

Note that a single text is actually a part of an object. The key to having the
whole object vanish, rather than just the text property, is
the L.removable("text") part of our lens composition. It
makes it so that when the text property is set to undefined, the result will
be undefined rather than merely an object without the text property.
If we remove all of the texts, we get the required value:
R.pipe(L.set(textIn("sv"), undefined),
L.set(textIn("en"), undefined))(sampleTexts)
// { contents: [] }

The contents property is not removed thanks to the L.define([])
part of our lens composition. It makes it so that when reading or writing
through the lens, undefined becomes the given value.
Exercises
Take out one (or more) of the L.define(...), L.normalize(...), L.valueOr(...),
or L.removable(...) parts from the lens composition and try to predict what
happens when you rerun the examples with the modified lens composition. Verify
your reasoning by actually rerunning the examples.
Shorthands
For clarity, the previous code snippets avoided some of the shorthands that this library supports. In particular,
- L.compose(...) can be abbreviated as an array [...],
- L.prop(propName) can be abbreviated as propName, and
- L.set(l, undefined, s) can be abbreviated as L.remove(l, s).
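To make the abbreviation rules concrete, here is a hypothetical plain-JS sketch of how strings and arrays could be interpreted as lenses for reading. The helper name getWith is invented for illustration and is not part of the library, and the sketch is simplified to string props and arrays only:

```javascript
// Hypothetical interpreter for the shorthands, reading only:
// a string acts as a prop lens, an array as a composition of steps.
const getWith = (optic, data) =>
  typeof optic === "string"
    ? (data instanceof Object ? data[optic] : undefined)
    : optic.reduce((d, o) => getWith(o, d), data)

// An array of steps acts like a composed lens:
getWith(["contents", "0", "text"],
        {contents: [{language: "en", text: "Title"}]})
// 'Title'
```

Note how partiality falls out naturally: a step applied to a non-object simply yields undefined.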
Systematic decomposition
It is also typical to compose lenses out of short paths following the schema of the JSON data being manipulated. Recall the lens from the start of the example:
L.compose(L.prop("contents"),
L.define([]),
L.normalize(R.sortBy(L.get("language"))),
L.find(R.whereEq({language})),
L.valueOr({language, text: ""}),
L.removable("text"),
L.prop("text"))

Following the structure or schema of the JSON, we could break this into three separate lenses:
- a lens for accessing the contents of a data object,
- a parameterized lens for querying a content object from contents, and
- a lens for accessing the text of a content object.
Furthermore, we could organize the lenses to reflect the structure of the JSON model:
const Content = {
text: [L.removable("text"), "text"]
}
const Contents = {
contentIn: language => [L.find(R.whereEq({language})),
L.valueOr({language, text: ""})]
}
const Texts = {
contents: ["contents",
L.define([]),
L.normalize(R.sortBy(L.get("language")))],
textIn: language => [Texts.contents,
Contents.contentIn(language),
Content.text]
}

We can now say:
L.get(Texts.textIn("sv"), sampleTexts)
// 'Rubrik'

This style of organizing lenses is overkill for our toy example. In a more
realistic case the sampleTexts object would contain many more properties.
Also, rather than composing a lens, like Texts.textIn above, to access a leaf
property from the root of our object, we might actually compose lenses
incrementally as we inspect the model structure.
Manipulating multiple items
So far we have used a lens to manipulate individual items. This library also supports traversals that compose with lenses and can target multiple items. Continuing on the tutorial example, let's define a traversal that targets all the texts:
const texts = [Texts.contents,
L.elems,
Content.text]

What makes the above a traversal is the L.elems part. Once a
traversal is composed with a lens, the whole result is a traversal. The other
parts of the above composition should already be familiar from previous
examples. Note how we were able to use the previously defined Texts.contents
and Content.text lenses.
Now, we can use the above traversal to collect all the texts:
L.collect(texts, sampleTexts)
// [ 'Title', 'Rubrik' ]

More generally, we can map and fold over texts. For example, we can compute the length of the longest text:
const Max = {empty: () => 0, concat: Math.max}
L.concatAs(R.length, Max, texts, sampleTexts)
// 6

Of course, we can also modify texts. For example, we could uppercase all the titles:
L.modify(texts, R.toUpper, sampleTexts)
// { contents: [ { language: 'en', text: 'TITLE' },
// { language: 'sv', text: 'RUBRIK' } ] }

We can also manipulate texts selectively. For example, we could remove all the texts that are longer than 5 characters:
L.remove([texts, L.when(t => t.length > 5)],
sampleTexts)
// { contents: [ { language: 'en', text: 'Title' } ] }

Reference
The combinators provided by this library are available as named imports. Typically one just imports the library as:
import * as L from "partial.lenses"

Optics
The abstractions provided by this library (traversals, lenses, and isomorphisms) are collectively known as optics. Traversals can target any number of elements. Lenses are a restriction of traversals that target a single element. Isomorphisms are a restriction of lenses with an inverse.
Some optics libraries provide many more abstractions, such as "optionals", "prisms" and "folds", to name a few, forming a DAG. Aside from being conceptually important, many of those abstractions are not only useful but required in a statically typed setting where data structures have precise constraints on their shapes, so to speak, and operations on data structures must respect those constraints at all times.
In a dynamically typed language like JavaScript, the shapes of run-time objects
are naturally malleable. Nothing immediately breaks if a new object is
created as a copy of another object by adding or removing a property, for
example. We can exploit this to our advantage by considering all optics as
partial. A partial optic, as manifested in this library, may be intended to
operate on data structures of a specific type, such as arrays or objects, but
also accepts the possibility that it may be given any valid JSON object or
undefined as input. When the input does not match the expectation of a
partial lens, the input is treated as being undefined. This allows specific
partial optics, such as the simple L.prop lens, to be used in a
wider range of situations than corresponding total optics.
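The partial behavior described above can be sketched in plain JS. Here getProp is a hypothetical helper, not the library's code; it merely illustrates how a partial prop getter treats any non-conforming input as undefined instead of throwing:

```javascript
// Treat anything that is not an object as undefined, as partial
// optics do, rather than throwing a TypeError.
const getProp = name => data =>
  data instanceof Object ? data[name] : undefined

getProp("x")({x: 1})  // 1
getProp("x")(null)    // undefined
getProp("x")(42)      // undefined
```

A total lens would have to reject the last two inputs; a partial lens simply views them as undefined.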
Operations on optics
≡ ▶ L.modify(optic, (maybeValue, index) => maybeValue, maybeData) ~> maybeData
L.modify allows one to map over the focused element
L.modify(["elems", 0, "x"], R.inc, {elems: [{x: 1, y: 2}, {x: 3, y: 4}]})
// { elems: [ { x: 2, y: 2 }, { x: 3, y: 4 } ] }

or, when using a traversal, elements
L.modify(["elems", L.elems, "x"],
R.dec,
{elems: [{x: 1, y: 2}, {x: 3, y: 4}]})
// { elems: [ { x: 0, y: 2 }, { x: 2, y: 4 } ] }

of a data structure.
≡ ▶ L.remove(optic, maybeData) ~> maybeData
L.remove allows one to remove the focused element
L.remove([0, "x"], [{x: 1}, {x: 2}, {x: 3}])
// [ { x: 2 }, { x: 3 } ]
or, when using a traversal, elements
L.remove([L.elems, "x", L.when(x => x > 1)], [{x: 1}, {x: 2, y: 1}, {x: 3}])
// [ { x: 1 }, { y: 1 } ]

from a data structure.
Note that L.remove(optic, maybeData) is equivalent
to L.set(lens, undefined, maybeData). With partial lenses, setting
to undefined typically has the effect of removing the focused element.
≡ ▶ L.set(optic, maybeValue, maybeData) ~> maybeData
L.set allows one to replace the focused element
L.set(["a", 0, "x"], 11, {id: "z"})
// {a: [{x: 11}], id: 'z'}

or, when using a traversal, elements
L.set([L.elems, "x", L.when(x => x > 1)], -1, [{x: 1}, {x: 2, y: 1}, {x: 3}])
// [ { x: 1 }, { x: -1, y: 1 }, { x: -1 } ]

of a data structure.
Note that L.set(lens, maybeValue, maybeData) is equivalent
to L.modify(lens, R.always(maybeValue), maybeData).
Nesting
≡ ▶ L.compose(...optics) ~> optic or [...optics]
L.compose performs composition of optics. The following equations
characterize composition:
L.compose() = L.identity
L.compose(l) = l
L.modify(L.compose(o, ...os)) = R.compose(L.modify(o), ...os.map(L.modify))
L.get(L.compose(o, ...os)) = R.pipe(L.get(o), ...os.map(L.get))

Furthermore, in this library, an array of optics [...optics] is treated as a
composition L.compose(...optics). Using the array notation, the above
equations can be written as:
[] = L.identity
[l] = l
L.modify([o, ...os]) = R.compose(L.modify(o), ...os.map(L.modify))
L.get([o, ...os]) = R.pipe(L.get(o), ...os.map(L.get))

For example:
L.set(["a", 1], "a", {a: ["b", "c"]})
// { a: [ 'b', 'a' ] }

L.get(["a", 1], {a: ["b", "c"]})
// 'c'

Note that R.compose is not the same as
L.compose.
Querying
≡ ▶ L.chain((value, index) => optic, optic) ~> optic
L.chain(toOptic, optic) is equivalent to
L.compose(optic, L.choose((maybeValue, index) =>
maybeValue === undefined
? L.zero
    : toOptic(maybeValue, index)))

Note that with the L.just, L.chain, L.choice
and L.zero combinators, one can consider optics as subsuming the
maybe monad.
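The maybe-monad reading can be sketched with plain functions. The helpers below are hypothetical, with undefined standing in for "nothing": chain short-circuits on undefined the way L.chain falls back to L.zero, and choice picks the first defined alternative like L.choice:

```javascript
// undefined plays the role of "nothing".
const chain = (f, m) => m === undefined ? undefined : f(m)
const choice = (...ms) => ms.find(m => m !== undefined)

chain(x => x + 1, 2)          // 3
chain(x => x + 1, undefined)  // undefined
choice(undefined, 5, 7)       // 5
choice()                      // undefined, like L.zero
```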
≡ ▶ L.choice(...lenses) ~> optic
L.choice returns a partial optic that acts like the first of the given lenses
whose view is not undefined on the given data structure. When the views of
all of the given lenses are undefined, the returned lens acts
like L.zero, which is the identity element of L.choice.
For example:
L.modify([L.elems, L.choice("a", "d")], R.inc, [{R: 1}, {a: 1}, {d: 2}])
// [ { R: 1 }, { a: 2 }, { d: 3 } ]

≡ ▶ L.choose((maybeValue, index) => optic) ~> optic
L.choose creates an optic whose operation is determined by the given function
that maps the underlying view, which can be undefined, to an optic. In other
words, the L.choose combinator allows an optic to be constructed after
examining the data structure being manipulated.
For example, given:
const majorAxis =
L.choose(({x, y} = {}) => Math.abs(x) < Math.abs(y) ? "y" : "x")

we get:
L.get(majorAxis, {x: 1, y: 2})
// 2

L.get(majorAxis, {x: -3, y: 1})
// -3

L.modify(majorAxis, R.negate, {x: 2, y: -3})
// { x: 2, y: 3 }

≡ ▶ L.optional ~> optic
L.optional is an optic over an optional element. When used as a traversal,
and the focus is undefined, the traversal is empty. When used as a lens, and
the focus is undefined, the lens will be read-only.
As an example, consider the difference between:
L.set([L.elems, "x"], 3, [{x: 1}, {y: 2}])
// [ { x: 3 }, { y: 2, x: 3 } ]

and:
L.set([L.elems, "x", L.optional], 3, [{x: 1}, {y: 2}])
// [ { x: 3 }, { y: 2 } ]

Note that L.optional is equivalent
to L.when(x => x !== undefined).
≡ ▶ L.when((maybeValue, index) => testable) ~> optic
L.when allows one to selectively skip elements within a traversal or to
selectively turn a lens into a read-only lens whose view is undefined.
For example:
L.modify([L.elems, L.when(x => x > 0)], R.negate, [0, -1, 2, -3, 4])
// [ 0, -1, -2, -3, -4 ]

Note that L.when(p) is equivalent
to L.choose((x, i) => p(x, i) ? L.identity : L.zero).
≡ ▶ L.zero ~> optic
L.zero is the identity element of L.choice
and L.chain. As a traversal, L.zero is a traversal of no
elements and as a lens, i.e. when used with L.get, L.zero is a
read-only lens whose view is always undefined.
For example:
L.collect([L.elems,
L.choose(x => (R.is(Array, x) ? L.elems :
R.is(Object, x) ? "x" :
L.zero))],
[1, {x: 2}, [3,4]])
// [ 2, 3, 4 ]

Recursing
≡ ▶ L.lazy(optic => optic) ~> optic
L.lazy can be used to construct optics lazily. The function given to L.lazy
is passed a forwarding proxy to its return value and can also make forward
references to other optics and possibly construct a recursive optic.
Note that when using L.lazy to construct a recursive optic, it will only work
in a meaningful way when the recursive uses are at nested positions meaning that
the recursive use is precomposed with some other optic.
For example, here is a traversal that targets all the primitive elements in a data structure of nested arrays and objects:
const flatten = [L.optional, L.lazy(rec => {
const elems = [L.elems, rec]
const values = [L.values, rec]
return L.choose(x => (x instanceof Array ? elems :
x instanceof Object ? values :
L.identity))
})]

Note that the above creates a cyclic representation of the traversal.
Now, for example:
L.collect(flatten, [[[1], 2], {y: 3}, [{l: 4, r: [5]}, {x: 6}]])
// [ 1, 2, 3, 4, 5, 6 ]

L.modify(flatten, x => x+1, [[[1], 2], {y: 3}, [{l: 4, r: [5]}, {x: 6}]])
// [ [ [ 2 ], 3 ], { y: 4 }, [ { l: 5, r: [ 6 ] }, { x: 7 } ] ]

L.remove([flatten, L.when(x => 3 <= x && x <= 4)],
         [[[1], 2], {y: 3}, [{l: 4, r: [5]}, {x: 6}]])
// [ [ [ 1 ], 2 ], [ { r: [ 5 ] }, { x: 6 } ] ]

Debugging
≡ ▶ L.log(...labels) ~> optic
L.log(...labels) is an identity optic that
outputs
console.log
messages with the given labels
(or
format in Node.js)
when data flows in either direction, get or set, through the lens.
For example:
L.get(["x", L.log()], {x: 10})
// get 10
// 10

L.set(["x", L.log("x")], "11", {x: 10})
// x get 10
// x set 11
// { x: '11' }

L.set(["x", L.log("%s x: %j")], "11", {x: 10})
// get x: 10
// set x: "11"
// { x: '11' }

Internals
≡ ▶ L.toFunction(optic) ~> optic
L.toFunction converts a given optic, which can be a string, an integer, an
array, or a function, to a function.
This can be useful for implementing new combinators and operations that cannot
otherwise be implemented using the combinators provided by this library.
For isomorphisms and lenses, the returned function will have the signature
(Functor c, (Maybe a, Index) -> c b, Maybe s, Index) -> c t

and for traversals the signature will be

(Applicative c, (Maybe a, Index) -> c b, Maybe s, Index) -> c t

Note that the above signatures are written using the "tupled" parameter notation
(...) -> ... to denote that the functions are not curried.
The
Functor and
Applicative arguments
are expected to conform to
their
Static Land
specifications.
Note that, in conjunction with partial optics, it may be advantageous to have
the Functor and Applicative algebras allow for partiality. With traversals it
is also possible, for example, to simply post-compose optics
with L.optional to eliminate undefined elements.
Traversals
A traversal operates over a collection of non-overlapping focuses that are visited only once and can, for example, be collected, folded, modified, set and removed.
Operations on traversals
≡ ▶ L.concat(monoid, traversal, maybeData) ~> value
L.concat({empty, concat}, t, s) performs a fold, using the given concat and
empty operations, over the elements focused on by the given traversal or lens
t from the given data structure s. The concat operation and the constant
returned by empty() should form
a
monoid over
the values focused on by t.
For example:
const Sum = {empty: () => 0, concat: (x, y) => x + y}
L.concat(Sum, L.elems, [1, 2, 3])
// 6

Note that L.concat is staged so that, after the first argument is given, as in
L.concat(m), a computation step is performed.
≡ ▶ L.concatAs((maybeValue, index) => value, monoid, traversal, maybeData) ~> value
L.concatAs(xMi2r, {empty, concat}, t, s) performs a map, using given function
xMi2r, and fold, using the given concat and empty operations, over the
elements focused on by the given traversal or lens t from the given data
structure s. The concat operation and the constant returned by empty()
should form
a
monoid over
the values returned by xMi2r.
For example:
L.concatAs(x => x, Sum, L.elems, [1, 2, 3])
// 6

Note that L.concatAs is staged so that, after the first two arguments are
given, as in L.concatAs(f, m), a computation step is performed.
≡ ▶ L.merge(monoid, traversal, maybeData) ~> value

WARNING: L.merge is obsolete, just use L.concat.
L.merge({empty, concat}, t, s) performs a fold, using the given concat and
empty operations, over the elements focused on by the given traversal or lens
t from the given data structure s. The concat operation and the constant
returned by empty() should form
a
commutative monoid over
the values focused on by t.
For example:
L.merge(Sum, L.elems, [1, 2, 3])
// 6

Note that L.merge is staged so that, after the first argument is given, as in
L.merge(m), a computation step is performed.
See also: L.concat.
≡ ▶ L.mergeAs((maybeValue, index) => value, monoid, traversal, maybeData) ~> value

WARNING: L.mergeAs is obsolete, just use L.concatAs.
L.mergeAs(xMi2r, {empty, concat}, t, s) performs a map, using given function
xMi2r, and fold, using the given concat and empty operations, over the
elements focused on by the given traversal or lens t from the given data
structure s. The concat operation and the constant returned by empty()
should form
a
commutative monoid over
the values returned by xMi2r.
For example:
L.mergeAs(x => x, Sum, L.elems, [1, 2, 3])
// 6

Note that L.mergeAs is staged so that, after the first two arguments are
given, as in L.mergeAs(f, m), a computation step is performed.
See also: L.concatAs.
Folds over traversals
≡ ▶ L.collect(traversal, maybeData) ~> [...values]
L.collect returns an array of the defined elements focused on by the given
traversal or lens from a data structure.
For example:
L.collect(["xs", L.elems, "x"], {xs: [{x: 1}, {x: 2}]})
// [ 1, 2 ]

Note that L.collect is equivalent
to L.collectAs(R.identity).
≡ ▶ L.collectAs((maybeValue, index) => maybeValue, traversal, maybeData) ~> [...values]
L.collectAs returns an array of the elements focused on by the given traversal
or lens from a data structure and mapped by the given function to a defined
value. Given a lens, there will be 0 or 1 elements in the returned array. Note
that a partial lens always targets an element, but L.collectAs implicitly
skips elements that are mapped to undefined by the given function. Given a
traversal, there can be any number of elements in the array returned by
L.collectAs.
For example:
L.collectAs(R.negate, ["xs", L.elems, "x"], {xs: [{x: 1}, {x: 2}]})
// [ -1, -2 ]

L.collectAs(toMaybe, traversal, maybeData) is equivalent
to
L.concatAs(R.pipe(toMaybe, toCollect), Collect, traversal, maybeData) where
Collect and toCollect are defined as follows:
const Collect = {empty: R.always([]), concat: R.concat}
const toCollect = x => x !== undefined ? [x] : []

So:
L.concatAs(R.pipe(R.negate, toCollect),
Collect,
["xs", L.elems, "x"],
{xs: [{x: 1}, {x: 2}]})
// [ -1, -2 ]

The internal implementation of L.collectAs is optimized and faster than the
above naïve implementation.
≡ ▶ L.foldl((value, maybeValue, index) => value, value, traversal, maybeData) ~> value
L.foldl performs a fold from left over the elements focused on by the given
traversal.
For example:
L.foldl((x, y) => x + y, 0, L.elems, [1,2,3])
// 6

≡ ▶ L.foldr((value, maybeValue, index) => value, value, traversal, maybeData) ~> value
L.foldr performs a fold from right over the elements focused on by the given
traversal.
For example:
L.foldr((x, y) => x * y, 1, L.elems, [1,2,3])
// 6

≡ ▶ L.maximum(traversal, maybeData) ~> maybeValue
L.maximum computes a maximum, according to the > operator, of the optional
elements targeted by the traversal.
For example:
L.maximum(L.elems, [1,2,3])
// 3

≡ ▶ L.minimum(traversal, maybeData) ~> maybeValue
L.minimum computes a minimum, according to the < operator, of the optional
elements targeted by the traversal.
For example:
L.minimum(L.elems, [1,2,3])
// 1

≡ ▶ L.product(traversal, maybeData) ~> number
L.product computes the product of the optional numbers targeted by the
traversal.
For example:
L.product(L.elems, [1,2,3])
// 6

≡ ▶ L.sum(traversal, maybeData) ~> number
L.sum computes the sum of the optional numbers targeted by the traversal.
For example:
L.sum(L.elems, [1,2,3])
// 6

Creating new traversals
≡ ▶ L.branch({prop: traversal, ...props}) ~> traversal
L.branch creates a new traversal from a given template object that specifies
how the new traversal should visit the properties of an object.
For example:
L.collect(L.branch({first: L.elems, second: L.identity}),
{first: ["x"], second: "y"})
// [ 'x', 'y' ]

Note that you can also compose L.branch with other optics. For example, you
can compose with L.pick to create a traversal over specific
elements of an array:
L.modify([L.pick({x: 0, z: 2}),
L.branch({x: L.identity, z: L.identity})],
R.negate,
[1, 2, 3])
// [ -1, 2, -3 ]

See the BST traversal section for a more meaningful example.
Traversals and combinators
≡ ▶ L.elems ~> traversal
L.elems is a traversal over the elements of an array-like
object. When written through, L.elems always produces an Array.
For example:
L.modify(["xs", L.elems, "x"], R.inc, {xs: [{x: 1}, {x: 2}]})
// { xs: [ { x: 2 }, { x: 3 } ] }

Just like with other optics operating on array-like objects, when
manipulating non-Array objects, L.rewrite can be used to
convert the result to the desired type, if necessary:
L.modify([L.rewrite(xs => Int8Array.from(xs)), L.elems],
R.inc,
Int8Array.from([-1,4,0,2,4]))
// Int8Array [ 0, 5, 1, 3, 5 ]

≡ ▶ L.values ~> traversal
L.values is a traversal over the values of an instanceof Object. When
written through, L.values always produces an Object.
For example:
L.modify(L.values, R.negate, {a: 1, b: 2, c: 3})
// { a: -1, b: -2, c: -3 }

When manipulating objects with a non-Object constructor
function XYZ(x,y,z) {
this.x = x
this.y = y
this.z = z
}
XYZ.prototype.norm = function () {
return (this.x * this.x +
this.y * this.y +
this.z * this.z)
}

L.rewrite can be used to convert the result to the desired type,
if necessary:
const objectTo = R.curry((C, o) => Object.assign(Object.create(C.prototype), o))
L.modify([L.rewrite(objectTo(XYZ)), L.values],
R.negate,
new XYZ(1,2,3))
// XYZ { x: -1, y: -2, z: -3 }

Lenses
Lenses always have a single focus which can be viewed directly.
Operations on lenses
≡ ▶ L.get(lens, maybeData) ~> maybeValue
L.get returns the focused element from a data structure.
For example:
L.get("y", {x: 112, y: 101})
// 101

Note that L.get does not work on traversals.
Creating new lenses
≡ ▶ L.lens((maybeData, index) => maybeValue, (maybeValue, maybeData, index) => maybeData) ~> lens
L.lens creates a new primitive lens. The first parameter is the getter and
the second parameter is the setter. The setter takes two parameters: the
first is the value written and the second is the data structure to write into.
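As a rough illustration of the getter/setter pairing, here is a hypothetical plain-JS sketch. The lens helper and its object representation are invented for illustration and are not the library's internal representation:

```javascript
// A lens pairs a getter with a setter that returns an updated copy
// of the data structure rather than mutating it.
const lens = (get, set) => ({get, set})

const xLens = lens(
  (data = {}) => data.x,
  (value, data = {}) => ({...data, x: value})
)

xLens.get({x: 1, y: 2})     // 1
xLens.set(3, {x: 1, y: 2})  // { x: 3, y: 2 }
xLens.get(undefined)        // undefined (partial: tolerates missing data)
```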
One should think twice before introducing a new primitive lens—most of the
combinators in this library have been introduced to reduce the need to write new
primitive lenses. With that said, there are still valid reasons to create new
primitive lenses. For example, here is a lens that we've used in production,
written with the help of Moment.js, to bidirectionally
convert a pair of start and end times to a duration:
const timesAsDuration = L.lens(
({start, end} = {}) => {
if (undefined === start)
return undefined
if (undefined === end)
return "Infinity"
return moment.duration(moment(end).diff(moment(start))).toJSON()
},
(duration, {start = moment().toJSON()} = {}) => {
if (undefined === duration || "Infinity" === duration) {
return {start}
} else {
return {
start,
end: moment(start).add(moment.duration(duration)).toJSON()
}
}
}
)

Now, for example:
L.get(timesAsDuration,
{start: "2016-12-07T09:39:02.451Z",
end: moment("2016-12-07T09:39:02.451Z").add(10, "hours").toISOString()})
// "PT10H"

L.set(timesAsDuration,
"PT10H",
{start: "2016-12-07T09:39:02.451Z",
end: "2016-12-07T09:39:02.451Z"})
// { end: '2016-12-07T19:39:02.451Z',
// start: '2016-12-07T09:39:02.451Z' }

When composed with L.pick, to flexibly pick the start and end
times, the above can be adapted to work in a wide variety of cases. However,
the above lens will never be added to this library, because it would require
adding a dependency on Moment.js.
See the Interfacing with Immutable.js section for another
example of using L.lens.
Computing derived props
≡ ▶ L.augment({prop: object => value, ...props}) ~> lens
L.augment is given a template of functions to compute new properties. When
not viewing or setting a defined object, the result is undefined. When
viewing a defined object, the object is extended with the computed properties.
When set with a defined object, the extended properties are removed.
For example:
L.modify(L.augment({y: r => r.x + 1}),
r => ({x: r.x + r.y, y: 2, z: r.x - r.y}),
{x: 1})
// { x: 3, z: -1 }

Enforcing invariants
≡ ▶ L.defaults(valueIn) ~> lens
L.defaults is used to specify a default context or value for an element in
case it is missing. When set with the default value, the effect is to remove
the element. This can be useful for both making partial lenses with propagating
removal and for avoiding having to check for and provide default values
elsewhere.
For example:
L.get(["items", L.defaults([])], {})
// []

L.get(["items", L.defaults([])], {items: [1, 2, 3]})
// [ 1, 2, 3 ]

L.set(["items", L.defaults([])], [], {items: [1, 2, 3]})
// undefined

Note that L.defaults(valueIn) is equivalent
to L.replace(undefined, valueIn).
≡ ▶ L.define(value) ~> lens
L.define is used to specify a value to act as both the default value and the
required value for an element.
L.get(["x", L.define(null)], {y: 10})
// null

L.set(["x", L.define(null)], undefined, {y: 10})
// { y: 10, x: null }

Note that L.define(value) is equivalent to [L.required(value), L.defaults(value)].
≡ ▶ L.normalize((value, index) => maybeValue) ~> lens
L.normalize maps the value with the same transform both when viewed and when
set and implicitly maps undefined to undefined.
One use case for normalize is to make it easy to determine whether, after a
change, the data has actually changed. By keeping the data normalized, a
simple R.equals comparison will do.
Note that the difference between L.normalize and L.rewrite is
that L.normalize applies the transform in both directions
while L.rewrite only applies the transform when writing.
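The difference can be sketched in plain JS. The helpers mkNormalize and mkRewrite below are invented names for illustration, not library internals:

```javascript
// Pass undefined through untouched, as both combinators do.
const thru = fn => x => x === undefined ? undefined : fn(x)

// normalize transforms in both directions; rewrite only on write.
const mkNormalize = fn => ({read: thru(fn), write: thru(fn)})
const mkRewrite = fn => ({read: x => x, write: thru(fn)})

const sortAsc = xs => [...xs].sort((a, b) => a - b)

mkNormalize(sortAsc).read([3, 1, 2])  // [ 1, 2, 3 ]
mkRewrite(sortAsc).read([3, 1, 2])    // [ 3, 1, 2 ] (unchanged on read)
mkRewrite(sortAsc).write([3, 1, 2])   // [ 1, 2, 3 ]
```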
≡ ▶ L.required(valueOut) ~> lens
L.required is used to specify that an element is not to be removed; in case it
is removed, the given value will be substituted instead.
For example:
L.remove(["items", 0], {items: [1]})
// undefined

L.remove([L.required({}), "items", 0], {items: [1]})
// {}

L.remove(["items", L.required([]), 0], {items: [1]})
// { items: [] }

Note that L.required(valueOut) is equivalent
to L.replace(valueOut, undefined).
≡ ▶ L.rewrite((valueOut, index) => maybeValueOut) ~> lens
L.rewrite maps the value with the given transform when set and implicitly maps
undefined to undefined. One use case for rewrite is to re-establish data
structure invariants after changes.
Note that the difference between L.normalize and L.rewrite
is that L.normalize applies the transform in both directions
while L.rewrite only applies the transform when writing.
See the BST as a lens section for a meaningful example.
Lensing array-like objects
Objects that have a non-negative integer length, as well as strings (which are
not considered Object instances in JavaScript), are considered array-like
objects by partial optics.
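A hypothetical predicate matching this description might look as follows; isArrayLike is an invented helper, not something the library exports:

```javascript
// Strings, or objects with a non-negative integer length, count as
// array-like for the purposes of partial optics.
const isArrayLike = x =>
  typeof x === "string" ||
  (x instanceof Object && Number.isInteger(x.length) && 0 <= x.length)

isArrayLike("LoLa")       // true
isArrayLike([1, 2, 3])    // true
isArrayLike({length: 2})  // true
isArrayLike({})           // false
```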
When writing a defined value through an optic that operates on array-like
objects, the result is always an Array. For example:
L.set(1, "a", "LoLa")
// [ 'L', 'a', 'L', 'a' ]

It may seem like the result should be of the same type as the object being
manipulated, but that is problematic, because the focus of a partial optic is
always optional. Instead, when manipulating strings or array-like non-Array
objects, L.rewrite can be used to convert the result to the
desired type, if necessary. For example:
L.set([L.rewrite(R.join("")), 1], "a", "LoLa")
// 'LaLa'

≡ ▶ L.append ~> lens
L.append is a write-only lens that can be used to append values to
an array-like object. The view of L.append is always
undefined.
For example:
L.get(L.append, ["x"])
// undefined

L.set(L.append, "x", undefined)
// [ 'x' ]

L.set(L.append, "x", ["z", "y"])
// [ 'z', 'y', 'x' ]

Note that L.append is equivalent to L.index(i) with the index
i set to the length of the focused array or 0 in case the focus is not a
defined array.
≡ ▶ L.filter((value, index) => testable) ~> lens
L.filter operates on array-like objects. When not viewing an
array-like object, the result is undefined. When viewing an array-like
object, only elements matching the given predicate will be returned. When set,
the resulting array will be formed by concatenating the elements of the set
array-like object and the elements of the complement of the filtered focus. If
the resulting array would be empty, the whole result will be undefined.
For example:
L.set(L.filter(x => x <= "2"), "abcd", "3141592")
// [ 'a', 'b', 'c', 'd', '3', '4', '5', '9' ]

NOTE: If you are merely modifying a data structure, and don't need to limit
yourself to lenses, consider using the L.elems traversal composed
with L.when.
An alternative design for filter could implement a smarter algorithm to combine
arrays when set. For example, an algorithm based
on edit distance could be used to
maintain relative order of elements. While this would not be difficult to
implement, it doesn't seem to make sense, because in most cases use
of L.normalize or L.rewrite would be
preferable. Also, the L.elems traversal composed
with L.when will retain order of elements.
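The set direction described above (set array-like concatenated with the complement of the filtered focus) can be sketched in plain JavaScript. This is an illustrative model of the assumed semantics, not the library code:

```javascript
// Illustrative sketch of L.filter's set direction (assumed semantics, not library code):
// the written array-like is concatenated with the complement of the filtered focus.
const filterSet = (pred, newElems, xs) => {
  const complement = Array.from(xs).filter((x, i) => !pred(x, i))
  const result = Array.from(newElems == null ? [] : newElems).concat(complement)
  return result.length ? result : undefined  // empty results become undefined
}

filterSet(x => x <= "2", "abcd", "3141592")
// [ 'a', 'b', 'c', 'd', '3', '4', '5', '9' ]
```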
≡ ▶ L.find((value, index) => testable) ~> lens
L.find operates on array-like objects
like L.index, but the index to be viewed is determined by finding
the first element from the focus that matches the given predicate. When no
matching element is found, the effect is the same as with L.append.
L.remove(L.find(x => x <= 2), [3,1,4,1,5,9,2])
// [ 3, 4, 1, 5, 9, 2 ]
≡ ▶ L.findWith(...lenses) ~> lens
L.findWith(...lenses) chooses an index from an array-like
object through which the composition of the given lenses, [...lenses], focuses
on a defined item and then returns a lens that focuses on that item.
For example:
L.get(L.findWith("x"), [{z: 6}, {x: 9}, {y: 6}])
// 9
L.set(L.findWith("x"), 3, [{z: 6}, {x: 9}, {y: 6}])
// [ { z: 6 }, { x: 3 }, { y: 6 } ]
≡ ▶ L.index(elemIndex) ~> lens or elemIndex
L.index(elemIndex) or just elemIndex focuses on the element at specified
index of an array-like object.
- When not viewing an index with a defined element, the result is undefined.
- When setting to undefined, the element is removed from the resulting array, shifting all higher indices down by one. If the result would be an empty array, the whole result will be undefined.
- When setting a defined value to an index that is higher than the length of the array-like object, the missing elements will be filled with undefined.
For example:
L.set(2, "z", ["x", "y", "c"])
// [ 'x', 'y', 'z' ]
NOTE: There is a gotcha related to removing elements from array-like
objects. Namely, when the last element is removed, the result is undefined
rather than an empty array. This is by design, because this allows the removal
to propagate upwards. It is not uncommon, however, to have cases where removing
the last element from an array-like object must not remove the array itself.
Consider the following examples without L.required([]):
L.remove(0, ["a", "b"])
// [ 'b' ]
L.remove(0, ["b"])
// undefined
L.remove(["elems", 0], {elems: ["b"], some: "thing"})
// { some: 'thing' }
Then consider the same examples with L.required([]):
L.remove([L.required([]), 0], ["a", "b"])
// [ 'b' ]
L.remove([L.required([]), 0], ["b"])
// []
L.remove(["elems", L.required([]), 0], {elems: ["b"], some: "thing"})
// { elems: [], some: 'thing' }
There is a related gotcha with L.required. Consider the
following example:
L.remove(L.required([]), [])
// []
L.get(L.required([]), [])
// undefined
In other words, L.required works in both directions. Thanks to
the handling of undefined within partial lenses, this is often not a problem,
but sometimes you need the "default" value both ways. In that case you can
use L.define.
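A "default in both directions" can be modeled as a plain normalization function applied both when reading and when writing. The following is a simplified standalone sketch of the assumed behavior of L.define, not the library code:

```javascript
// Simplified standalone sketch of "default both ways" a la L.define (assumed semantics):
// the same normalization applies when reading and when writing.
const defineAs = def => x => (x === undefined ? def : x)

defineAs([])(undefined)  // [] -- defaulted in either direction
defineAs([])(["a"])      // [ 'a' ] -- defined values pass through
```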
≡ ▶ L.slice(maybeBegin, maybeEnd) ~> lens
L.slice focuses on a specified range of elements of
an array-like object. The range is determined like with the
standard
slice method
of arrays, basically
- non-negative values are relative to the beginning of the array-like object,
- negative values are relative to the end of the array-like object, and
- undefined gives the defaults: 0 for the begin and length for the end.
For example:
L.get(L.slice(1, -1), [1,2,3,4])
// [ 2, 3 ]
L.set(L.slice(-2, undefined), [0], [1,2,3,4])
// [ 1, 2, 0 ]
Lensing objects
≡ ▶ L.prop(propName) ~> lens or propName
L.prop(propName) or just propName focuses on the specified object property.
- When not viewing a defined object property, the result is undefined.
- When writing to a property, the result is always an Object.
- When setting a property to undefined, the property is removed from the result. If the result would be an empty object, the whole result will be undefined.
When setting or removing properties, the order of keys is preserved.
For example:
L.get("y", {x: 1, y: 2, z: 3})
// 2
L.set("y", -2, {x: 1, y: 2, z: 3})
// { x: 1, y: -2, z: 3 }
When manipulating objects whose constructor is not
Object, L.rewrite can be used to convert the result to the
desired type, if necessary:
L.set([L.rewrite(objectTo(XYZ)), "z"], 3, new XYZ(3,1,4))
// XYZ { x: 3, y: 1, z: 3 }
≡ ▶ L.props(...propNames) ~> lens
L.props focuses on a subset of properties of an object, allowing one to treat
the subset of properties as a unit. The view of L.props is undefined when
none of the properties is defined. Otherwise the view is an object containing a
subset of the properties. Setting through L.props updates the whole subset of
properties, which means that any missing properties are removed if they
existed previously. When set, any extra properties are ignored.
L.set(L.props("x", "y"), {x: 4}, {x: 1, y: 2, z: 3})
// { x: 4, z: 3 }
Note that L.props(k1, ..., kN) is equivalent to L.pick({[k1]: k1, ..., [kN]: kN}).
≡ ▶ L.removable(...propNames) ~> lens
L.removable creates a lens that, when written through, replaces the whole
result with undefined if none of the given properties is defined in the
written object. L.removable is designed for making removal propagate through
objects.
Contrast the following examples:
L.remove("x", {x: 1, y: 2})
// { y: 2 }
L.remove([L.removable("x"), "x"], {x: 1, y: 2})
// undefined
Note that L.removable(...ps) is roughly equivalent
to
L.rewrite(y => y instanceof Object && !R.any(p => R.has(p, y), ps) ? undefined : y).
Also note that, in a composition, L.removable is likely preceded
by L.valueOr (or L.defaults) like in
the tutorial example. In such a pair, the preceding lens gives a
default value when reading through the lens, allowing one to use such a lens to
insert new objects. The following lens then specifies that removing the then
focused property (or properties) should remove the whole object. In cases where
the shape of the incoming object is known, L.defaults can replace
such a pair.
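The rough equivalent rewrite mentioned above can be made runnable without Ramda by replacing R.any and R.has with plain JavaScript. This is an illustrative sketch, not the library's implementation:

```javascript
// Runnable form of the rough equivalent of L.removable(...ps) given above,
// with R.any/R.has replaced by plain JavaScript (illustrative sketch):
const removableRewrite = ps => y =>
  y instanceof Object && !ps.some(p => p in y) ? undefined : y

removableRewrite(["x"])({x: 1, y: 2})  // { x: 1, y: 2 } -- "x" still present
removableRewrite(["x"])({y: 2})        // undefined -- "x" is gone, drop the object
```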
Providing defaults
≡ ▶ L.valueOr(valueOut) ~> lens
L.valueOr is an asymmetric lens used to specify a default value in case the
focus is undefined or null. When set, L.valueOr behaves like the identity
lens.
For example:
L.get(L.valueOr(0), null)
// 0
L.set(L.valueOr(0), 0, 1)
// 0
L.remove(L.valueOr(0), 1)
// undefined
Adapting to data
≡ ▶ L.orElse(backupLens, primaryLens) ~> lens
L.orElse(backupLens, primaryLens) acts like primaryLens when its view is not
undefined and otherwise like backupLens. You can use L.orElse on its own
with R.reduceRight
(and R.reduce) to create an associative
choice over lenses or use L.orElse to specify a default or backup lens
for L.choice, for example.
Read-only mapping
≡ ▶ L.just(maybeValue) ~> lens
L.just returns a read-only lens whose view is always the given value. In
other words, for all x, y and z:
L.get(L.just(z), x) = z
L.set(L.just(z), y, x) = x
Note that L.just(x) is equivalent to L.to(R.always(x)).
L.just can be seen as the unit function of the monad formed
with L.chain.
≡ ▶ L.to((maybeValue, index) => maybeValue) ~> lens
L.to creates a read-only lens whose view is determined by the given function.
For example:
L.get(["x", L.to(x => x + 1)], {x: 1})
// 2
L.set(["x", L.to(x => x + 1)], 3, {x: 1})
// { x: 1 }
Transforming data
≡ ▶ L.pick({prop: lens, ...props}) ~> lens
L.pick creates a lens out of the given object template of lenses and allows
one to pick apart a data structure and then put it back together. When viewed,
an object is created, whose properties are obtained by viewing through the
lenses of the template. When set with an object, the properties of the object
are set to the context via the lenses of the template. undefined is treated
as the equivalent of empty or non-existent in both directions.
For example, let's say we need to deal with data and schema in need of some semantic restructuring:
const sampleFlat = {px: 1, py: 2, vx: 1.0, vy: 0.0}
We can use L.pick to create lenses to pick apart the data and put it back
together into a more meaningful structure:
const asVec = prefix => L.pick({x: prefix + "x", y: prefix + "y"})
const sanitize = L.pick({pos: asVec("p"), vel: asVec("v")})
We now have a better structured view of the data:
L.get(sanitize, sampleFlat)
// { pos: { x: 1, y: 2 }, vel: { x: 1, y: 0 } }
That works in both directions:
L.modify([sanitize, "pos", "x"], R.add(5), sampleFlat)
// { px: 6, py: 2, vx: 1, vy: 0 }
NOTE: In order for a lens created with L.pick to work in a predictable
manner, the given lenses must operate on independent parts of the data
structure. As a trivial example, in L.pick({x: "same", y: "same"}) both of
the resulting object properties, x and y, address the same property of the
underlying object, so writing through the lens will give unpredictable results.
Note that, when set, L.pick simply ignores any properties that the given
template doesn't mention. Also note that the underlying data structure need not
be an object.
≡ ▶ L.replace(maybeValueIn, maybeValueOut) ~> lens
L.replace(maybeValueIn, maybeValueOut), when viewed, replaces the value
maybeValueIn with maybeValueOut and vice versa when set.
For example:
L.get(L.replace(1, 2), 1)
// 2
L.set(L.replace(1, 2), 2, 0)
// 1
The main use case for replace is to handle optional and required properties
and elements. In most cases, rather than using replace, you will make
selective use of defaults, required
and define.
Isomorphisms
The focus of an isomorphism is the whole data structure rather than a part of
it. Furthermore, an isomorphism can be inverted. More
specifically, a lens, iso, is an isomorphism iff the following equations hold
for all x and y in the domain and range, respectively, of the lens:
L.set(iso, L.get(iso, x), undefined) = x
L.get(iso, L.set(iso, y, undefined)) = y
The above equations mean that x => L.get(iso, x) and y => L.set(iso, y, undefined) are inverses of each other.
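The two laws can be checked concretely with a pair of plain functions standing in for x => L.get(iso, x) and y => L.set(iso, y, undefined). The names below are illustrative, not part of the library:

```javascript
// Checking the isomorphism laws with a concrete forward/backward pair
// (plain functions standing in for the two directions of an iso; illustrative):
const forward = x => String(x)    // x => L.get(iso, x)
const backward = y => Number(y)   // y => L.set(iso, y, undefined)

backward(forward(42)) === 42      // true: first law
forward(backward("42")) === "42"  // true: second law
```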
Operations on isomorphisms
≡ ▶ L.getInverse(isomorphism, maybeData) ~> maybeData
L.getInverse views through an isomorphism in the inverse direction.
For example:
const numeric = f => x => typeof x === "number" ? f(x) : undefined
const offBy1 = L.iso(numeric(R.inc), numeric(R.dec))
L.getInverse(offBy1, 1)
// 0
Note that L.getInverse(iso, data) is equivalent
to L.set(iso, data, undefined).
Also note that, while L.getInverse makes most sense when used with an
isomorphism, it is valid to use L.getInverse with partial lenses in general.
Doing so essentially constructs a minimal data structure that contains the given
value. For example:
L.getInverse("meaning", 42)
// { meaning: 42 }
Creating new isomorphisms
≡ ▶ L.iso(maybeData => maybeValue, maybeValue => maybeData) ~> isomorphism
L.iso creates a new primitive isomorphism.
For example:
const negate = L.iso(numeric(R.negate), numeric(R.negate))
L.get([negate, L.inverse(negate)], 112)
// 112
Isomorphisms and combinators
≡ ▶ L.identity ~> isomorphism
L.identity is the identity element of lens composition and also the identity
isomorphism. The following equations characterize L.identity:
L.get(L.identity, x) = x
L.modify(L.identity, f, x) = f(x)
L.compose(L.identity, l) = l
L.compose(l, L.identity) = l
≡ ▶ L.inverse(isomorphism) ~> isomorphism
L.inverse returns the inverse of the given isomorphism. Note that this
operation only makes sense on isomorphisms.
For example:
L.get(L.inverse(offBy1), 1)
// 0
Examples
Note that if you are new to lenses, then you probably want to start with the tutorial.
An array of ids as boolean flags
A case that we have run into multiple times is where we have an array of constant strings such as
const sampleFlags = ["id-19", "id-76"]
that we wish to manipulate as if it was a collection of boolean flags. Here is a parameterized lens that does just that:
const flag = id => [L.normalize(R.sortBy(R.identity)),
L.find(R.equals(id)),
L.replace(undefined, false),
L.replace(id, true)]
Now we can treat individual constants as boolean flags:
L.get(flag("id-69"), sampleFlags)
// false
L.get(flag("id-76"), sampleFlags)
// true
In both directions:
L.set(flag("id-69"), true, sampleFlags)
// ['id-19', 'id-69', 'id-76']
L.set(flag("id-76"), false, sampleFlags)
// ['id-19']
BST as a lens
Binary search trees might initially seem to be outside the scope of definable
lenses. However, given basic BST operations, one could easily wrap them as a
primitive partial lens. But could we leverage lens combinators to build a BST
lens more compositionally? We can. The L.choose combinator
allows for dynamic construction of lenses based on examining the data structure
being manipulated. Inside L.choose we can write the ordinary BST
logic to pick the correct branch based on the key in the currently examined node
and the key that we are looking for. So, here is our first attempt at a BST
lens:
const searchAttempt = key => L.lazy(rec => {
const smaller = ["smaller", rec]
const greater = ["greater", rec]
const found = L.defaults({key})
return L.choose(n => {
if (!n || key === n.key)
return found
return key < n.key ? smaller : greater
})
})
const valueOfAttempt = key => [searchAttempt(key), "value"]
Note that we also make use of the L.lazy combinator to create a
recursive lens with a cyclic representation.
This actually works to a degree. We can use the valueOfAttempt lens
constructor to build a binary tree. Here is a little helper to build a tree
from pairs:
const fromPairs =
R.reduce((t, [k, v]) => L.set(valueOfAttempt(k), v, t), undefined)
Now:
const sampleBST = fromPairs([[3, "g"], [2, "a"], [1, "m"], [4, "i"], [5, "c"]])
sampleBST
// { key: 3,
// value: 'g',
// smaller: { key: 2, value: 'a', smaller: { key: 1, value: 'm' } },
// greater: { key: 4, value: 'i', greater: { key: 5, value: 'c' } } }
However, the above searchAttempt lens constructor does not maintain the BST
structure when values are being removed:
L.remove(valueOfAttempt(3), sampleBST)
// { key: 3,
// smaller: { key: 2, value: 'a', smaller: { key: 1, value: 'm' } },
// greater: { key: 4, value: 'i', greater: { key: 5, value: 'c' } } }How do we fix this? We could check and transform the data structure to a BST
after changes. The L.rewrite combinator can be used for that
purpose. Here is a naïve rewrite to fix a tree after value removal:
const naiveBST = L.rewrite(n => {
if (undefined !== n.value) return n
const s = n.smaller, g = n.greater
if (!s) return g
if (!g) return s
return L.set(search(s.key), s, g)
})
Here is a working search lens and a valueOf lens constructor:
const search = key => L.lazy(rec => {
const smaller = ["smaller", rec]
const greater = ["greater", rec]
const found = L.defaults({key})
return [naiveBST, L.choose(n => {
if (!n || key === n.key)
return found
return key < n.key ? smaller : greater
})]
})
const valueOf = key => [search(key), "value"]
Now we can also remove values from a binary tree:
L.remove(valueOf(3), sampleBST)
// { key: 4,
// value: 'i',
// greater: { key: 5, value: 'c' },
// smaller: { key: 2, value: 'a', smaller: { key: 1, value: 'm' } } }
As an exercise, you could improve the rewrite to better maintain balance.
Perhaps you might even enhance it to maintain a balance condition such
as AVL
or Red-Black. Another
worthy exercise would be to make it so that the empty binary tree is null
rather than undefined.
BST traversal
What about traversals over BSTs? We can use
the L.branch combinator to define an in-order traversal over the
values of a BST:
const values = L.lazy(rec => [
L.optional,
naiveBST,
L.branch({smaller: rec,
value: L.identity,
greater: rec})])
Given a binary tree sampleBST we can now manipulate it as a whole. For
example:
const Concat = {empty: () => "", concat: R.concat}
L.concatAs(R.toUpper, Concat, values, sampleBST)
// 'MAGIC'
L.modify(values, R.toUpper, sampleBST)
// { key: 3,
// value: 'G',
// smaller: { key: 2, value: 'A', smaller: { key: 1, value: 'M' } },
// greater: { key: 4, value: 'I', greater: { key: 5, value: 'C' } } }
L.remove([values, L.when(x => x > "e")], sampleBST)
// { key: 5, value: 'c', smaller: { key: 2, value: 'a' } }
Interfacing with Immutable.js
Immutable.js is a popular library providing immutable data structures. As argued in Lenses with Immutable.js it can be useful to wrap such libraries as optics.
When interfacing external libraries with partial lenses one does need to consider whether and how to support partiality. Partial lenses allow one to insert new and remove existing elements rather than just view and update existing elements.
List indexing
Here is a primitive partial lens for
indexing List written
using L.lens:
const getList = i => xs => Immutable.List.isList(xs) ? xs.get(i) : undefined
const setList = i => (x, xs) => {
if (!Immutable.List.isList(xs))
xs = Immutable.List()
if (x !== undefined)
return xs.set(i, x)
xs = xs.delete(i)
return xs.size ? xs : undefined
}
const idxList = i => L.lens(getList(i), setList(i))
Note how the above uses isList to check the input. When viewing, in case the
input is not a List, the proper result is undefined. When updating, the
proper way to handle a non-List is to treat it as empty and also to replace a
resulting empty list with undefined. Also, when updating, we treat
undefined as a request to delete rather than set.
We can now view existing elements:
const sampleList = Immutable.List(["a", "l", "i", "s", "t"])
L.get(idxList(2), sampleList)
// 'i'
Update existing elements:
L.modify(idxList(1), R.toUpper, sampleList)
// List [ "a", "L", "i", "s", "t" ]
Remove existing elements:
L.remove(idxList(0), sampleList)
// List [ "l", "i", "s", "t" ]
And removing the last element propagates removal:
L.remove(["elems", idxList(0)],
{elems: Immutable.List(["x"]), look: "No elems!"})
// { look: 'No elems!' }
We can also create lists from non-lists:
L.set(idxList(0), "x", undefined)
// List [ "x" ]
And we can also append new elements:
L.set(idxList(5), "!", sampleList)
// List [ "a", "l", "i", "s", "t", "!" ]
Consider what happens when the index given to idxList points beyond
the last element. Both the L.index lens and the above lens add
undefined values, which is not ideal with partial lenses, because of the
special treatment of undefined. In practice, however, it is not typical to
set elements except to append just after the last element.
Interfacing traversals
Fortunately we do not need Immutable.js data structures to provide a compatible
partial
traverse function
to support traversals, because it is also possible to implement
traversals simply by providing suitable isomorphisms between Immutable.js data
structures and JSON. Here is a partial isomorphism between
List and arrays:
const fromList = xs => Immutable.List.isList(xs) ? xs.toArray() : undefined
const toList = xs => R.is(Array, xs) && xs.length ? Immutable.List(xs) : undefined
const isoList = L.iso(fromList, toList)
So, now we can compose a traversal over List as:
const seqList = [isoList, L.elems]
And all the usual operations work as one would expect, for example:
L.remove([seqList, L.when(c => c < "i")], sampleList)
// List [ 'l', 's', 't' ]
And:
L.concatAs(R.toUpper,
Concat,
[seqList, L.when(c => c <= "i")],
sampleList)
// 'AI'
Background
Motivation
Consider the following REPL session using Ramda:
R.set(R.lensPath(["x", "y"]), 1, {})
// { x: { y: 1 } }
R.set(R.compose(R.lensProp("x"), R.lensProp("y")), 1, {})
// TypeError: Cannot read property 'y' of undefined
R.view(R.lensPath(["x", "y"]), {})
// undefined
R.view(R.compose(R.lensProp("x"), R.lensProp("y")), {})
// TypeError: Cannot read property 'y' of undefined
R.set(R.lensPath(["x", "y"]), undefined, {x: {y: 1}})
// { x: { y: undefined } }
R.set(R.compose(R.lensProp("x"), R.lensProp("y")), undefined, {x: {y: 1}})
// { x: { y: undefined } }
One might assume that R.lensPath([p0, ...ps]) is equivalent to
R.compose(R.lensProp(p0), ...ps.map(R.lensProp)), but that is not the case.
With partial lenses you can robustly compose a path lens from prop
lenses L.compose(L.prop(p0), ...ps.map(L.prop)) or just use the
shorthand notation [p0, ...ps]. In JavaScript, missing (and
mismatching) data can be mapped to undefined, which is what partial lenses
also do, because undefined is not a valid JSON value.
When a part of a data structure is missing, an attempt to view it returns
undefined. When a part is missing, setting it to a defined value inserts the
new part. Setting an existing part to undefined removes it.
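The view/insert/remove semantics just described can be modeled for a single property with a pair of plain functions. This is a toy standalone sketch, not the library's implementation:

```javascript
// Toy standalone model of a partial prop "lens" (illustrative, not the library code),
// showing the view/insert/remove behavior described above:
const viewProp = (k, o) => (o instanceof Object ? o[k] : undefined)
const setProp = (k, v, o) => {
  const r = Object.assign({}, o instanceof Object ? o : {})
  if (v === undefined) delete r[k]
  else r[k] = v
  return Object.keys(r).length ? r : undefined  // empty objects propagate removal
}

viewProp("y", {})                // undefined -- missing part
setProp("y", 1, {})              // { y: 1 } -- insertion
setProp("y", undefined, {y: 1})  // undefined -- removal propagates
```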
Design choices
There are several lens and optics libraries for JavaScript. In this section I'd like to very briefly elaborate on a number of design choices made during the course of developing this library.
Partiality
Making all optics partial allows optics to not only view and update existing elements, but also to insert, replace (as in replace with data of different type) and remove elements and to do so in a seamless and efficient way. In a library based on total lenses, one needs to e.g. explicitly compose lenses with prisms to deal with partiality. This not only makes the optic compositions more complex, but can also have a significant negative effect on performance.
The downside of implicit partiality is the potential to create incorrect optics that signal errors later than when using total optics.
Focus on JSON
JSON is the data-interchange format of choice today. By being able to effectively and efficiently manipulate JSON data structures directly, one can avoid using special internal representations of data and make things simpler (e.g. no need to convert from JSON to efficient immutable collections and back).
Use of undefined
undefined is a natural choice in JavaScript, especially when dealing with
JSON, to represent nothingness. Some libraries use null, but that is arguably
a poor choice, because null is a valid JSON value. Some libraries implement
special Maybe types, but the benefits do not seem worth the trouble. First of
all, undefined already exists in JavaScript and is not a valid JSON value.
Inventing a new value to represent nothingness doesn't seem to add much. OTOH,
wrapping values with Just objects introduces a significant performance
overhead due to extra allocations. Operations with optics do not otherwise
necessarily require large numbers of allocations and can be made highly
efficient.
Not having an explicit Just object means that dealing with values such as
Just Nothing requires special consideration.
Allowing strings and integers as optics
Aside from the brevity, allowing strings and non-negative integers to be directly used as optics allows one to avoid allocating closures for such optics. This can provide significant time and, more importantly, space savings in applications that create large numbers of lenses to address elements in data structures.
The downside of allowing such special values as optics is that the internal implementation needs to be careful to deal with them at any point a user given value needs to be interpreted as an optic.
Treating an array of optics as a composition of optics
Aside from the brevity, treating an array of optics as a composition allows the
library to be optimized to deal with simple paths highly efficiently and
eliminate the need for separate primitives
like assocPath
and dissocPath for performance reasons.
Client code can also manipulate such simple paths as data.
Applicatives
One interesting consequence of partiality is that it becomes possible to invert isomorphisms without explicitly making it possible to extract the forward and backward functions from an isomorphism. A simple internal implementation based on functors and applicatives seems to be expressive enough for a wide variety of operations.
L.branch
By providing combinators for creating new traversals, lenses and isomorphisms,
client code need not depend on the internal implementation of optics. The
current version of this library exposes the internal implementation
via L.toFunction, but it would not be unreasonable to not
provide such an operation. Only very few applications need to know the internal
representation of optics.
Indexing
Indexing in partial lenses is unnested, very simple and based on the indices and keys of the underlying data structures. When indexing was added, it essentially introduced no performance degradation, but since then a few operations have been added that do require extra allocations to support indexing. It is also possible to compose optics so as to create nested indices or paths, but currently no combinator is directly provided for that.
Static Land
The algebraic structures used in partial lenses follow the Static Land specification rather than the Fantasy Land specification. Static Land does not require wrapping values in objects, which translates to a significant performance advantage throughout the library, because fewer allocations are required.
Performance
Concern for performance has been a part of the work on partial lenses for some time. The basic principles can be summarized in order of importance:
- Minimize overheads
- Micro-optimize for common cases
- Avoid stack overflows
- Avoid quadratic algorithms
- Avoid optimizations that require large amounts of code
- Run benchmarks continuously to detect performance regressions
Benchmarks
Here are a few benchmark results on partial lenses (as L version 9.0.2) and
some roughly equivalent operations using Ramda (as R
version 0.23.0), Ramda Lens (as P
version 0.1.1), and Flunc Optics (as O
version 0.0.2). As always with benchmarks, you should take these numbers with a
pinch of salt and preferably try and measure your actual use cases!
7,429,258/s 1.00x R.reduceRight(add, 0, xs100)
464,228/s 16.00x L.foldr(add, 0, L.elems, xs100)
4,168/s 1782.37x O.Fold.foldrOf(O.Traversal.traversed, addC, 0, xs100)
11,245/s 1.00x R.reduceRight(add, 0, xs100000)
56/s 200.96x L.foldr(add, 0, L.elems, xs100000)
0/s Infinityx O.Fold.foldrOf(O.Traversal.traversed, addC, 0, xs100000) -- STACK OVERFLOW
678,969/s 1.00x L.foldl(add, 0, L.elems, xs100)
211,793/s 3.21x R.reduce(add, 0, xs100)
3,002/s 226.17x O.Fold.foldlOf(O.Traversal.traversed, addC, 0, xs100)
4,064,819/s 1.00x L.sum(L.elems, xs100)
541,416/s 7.51x L.merge(Sum, L.elems, xs100)
127,340/s 31.92x R.sum(xs100)
23,422/s 173.55x P.sumOf(P.traversed, xs100)
4,298/s 945.74x O.Fold.sumOf(O.Traversal.traversed, xs100)
574,523/s 1.00x L.maximum(L.elems, xs100)
3,362/s 170.86x O.Fold.maximumOf(O.Traversal.traversed, xs100)
151,447/s 1.00x L.sum([L.elems, L.elems, L.elems], xsss100)
147,450/s 1.03x L.merge(Sum, [L.elems, L.elems, L.elems], xsss100)
4,238/s 35.73x P.sumOf(R.compose(P.traversed, P.traversed, P.traversed), xsss100)
877/s 172.61x O.Fold.sumOf(R.compose(O.Traversal.traversed, O.Traversal.traversed, O.Traversal.traversed), xsss100)
255,796/s 1.00x L.collect(L.elems, xs100)
3,517/s 72.73x O.Fold.toListOf(O.Traversal.traversed, xs100)
115,728/s 1.00x L.collect([L.elems, L.elems, L.elems], xsss100)
9,067/s 12.76x R.chain(R.chain(R.identity), xsss100)
804/s 143.92x O.Fold.toListOf(R.compose(O.Traversal.traversed, O.Traversal.traversed, O.Traversal.traversed), xsss100)
66,523/s 1.00x R.flatten(xsss100)
37,087/s 1.79x L.collect(flatten, xsss100)
18,463,205/s 1.00x L.modify(L.elems, inc, xs)
1,881,362/s 9.81x R.map(inc, xs)
426,892/s 43.25x P.over(P.traversed, inc, xs)
404,660/s 45.63x O.Setter.over(O.Traversal.traversed, inc, xs)
425,113/s 1.00x L.modify(L.elems, inc, xs1000)
119,449/s 3.56x R.map(inc, xs1000)
403/s 1055.55x O.Setter.over(O.Traversal.traversed, inc, xs1000) -- QUADRATIC
366/s 1160.07x P.over(P.traversed, inc, xs1000) -- QUADRATIC
157,507/s 1.00x L.modify([L.elems, L.elems, L.elems], inc, xsss100)
9,806/s 16.06x R.map(R.map(R.map(inc)), xsss100)
3,514/s 44.82x P.over(R.compose(P.traversed, P.traversed, P.traversed), inc, xsss100)
2,935/s 53.66x O.Setter.over(R.compose(O.Traversal.traversed, O.Traversal.traversed, O.Traversal.traversed), inc, xsss100)
32,101,122/s 1.00x L.get(1, xs)
3,946,374/s 8.13x R.nth(1, xs)
1,587,369/s 20.22x R.view(l_1, xs)
23,329,807/s 1.00x L.set(1, 0, xs)
6,990,165/s 3.34x R.update(1, 0, xs)
982,638/s 23.74x R.set(l_1, 0, xs)
28,303,132/s 1.00x L.get("y", xyz)
24,002,161/s 1.18x R.prop("y", xyz)
2,450,501/s 11.55x R.view(l_y, xyz)
12,342,395/s 1.00x L.set("y", 0, xyz)
7,479,499/s 1.65x R.assoc("y", 0, xyz)
1,293,679/s 9.54x R.set(l_y, 0, xyz)
14,561,179/s 1.00x L.get([0,"x",0,"y"], axay)
14,189,439/s 1.03x R.path([0,"x",0,"y"], axay)
2,311,151/s 6.30x R.view(l_0x0y, axay)
485,852/s 29.97x R.view(l_0_x_0_y, axay)
4,615,467/s 1.00x L.set([0,"x",0,"y"], 0, axay)
833,456/s 5.54x R.assocPath([0,"x",0,"y"], 0, axay)
523,766/s 8.81x R.set(l_0x0y, 0, axay)
325,620/s 14.17x R.set(l_0_x_0_y, 0, axay)
4,371,655/s 1.00x L.modify([0,"x",0,"y"], inc, axay)
545,705/s 8.01x R.over(l_0x0y, inc, axay)
336,918/s 12.98x R.over(l_0_x_0_y, inc, axay)
24,363,305/s 1.00x L.remove(1, xs)
2,953,233/s 8.25x R.remove(1, 1, xs)
13,321,942/s 1.00x L.remove("y", xyz)
2,654,914/s 5.02x R.dissoc("y", xyz)
16,234,629/s 1.00x L.get(["x","y","z"], xyzn)
14,246,092/s 1.14x R.path(["x","y","z"], xyzn)
2,313,952/s 7.02x R.view(l_xyz, xyzn)
789,871/s 20.55x R.view(l_x_y_z, xyzn)
166,313/s 97.61x O.Getter.view(o_x_y_z, xyzn)
5,593,457/s 1.00x L.set(["x","y","z"], 0, xyzn)
1,411,586/s 3.96x R.assocPath(["x","y","z"], 0, xyzn)
714,821/s 7.82x R.set(l_xyz, 0, xyzn)
522,355/s 10.71x R.set(l_x_y_z, 0, xyzn)
218,509/s 25.60x O.Setter.set(o_x_y_z, 0, xyzn)
4,079,906/s 1.00x L.remove(50, xs100)
1,817,392/s 2.24x R.remove(50, 1, xs100)
4,260,008/s 1.00x L.set(50, 2, xs100)
1,697,027/s 2.51x R.update(50, 2, xs100)
687,213/s 6.20x R.set(l_50, 2, xs100)
Various operations on partial lenses have been optimized for common cases, but there is definitely a lot of room for improvement. The goal is to make partial lenses fast enough that performance isn't the reason why you might not want to use them.
See bench.js for details.
Lenses all the way
As said in the first sentence of this document, lenses are convenient for performing updates on individual elements of immutable data structures. Having abilities such as nesting, adapting, recursing and restructuring using lenses makes the notion of an individual element quite flexible and, even further, traversals make it possible to selectively target zero or more elements of non-trivial data structures in a single operation. It can be tempting to try to do everything with lenses, but that will likely only lead to misery. It is important to understand that lenses are just one of many functional abstractions for working with data structures and sometimes other approaches can lead to simpler or easier solutions. Zippers, for example, are, in some ways, less principled and can implement queries and transforms that are outside the scope of lenses and traversals.
One type of use case that we've run into multiple times and that falls outside the sweet spot of lenses is performing uniform transforms over data structures. For example, we've run into the following use cases:
- Eliminate all references to an object with a particular id.
- Transform all instances of certain objects over many paths.
- Filter out extra fields from objects of varying shapes and paths.
One approach to making such whole data structure spanning updates is to use a simple bottom-up transform. Here is a simple implementation for JSON based on ideas from the Uniplate library:
const descend = (w2w, w) => R.is(Object, w) ? R.map(w2w, w) : w
const substUp = (h2h, w) => descend(h2h, descend(w => substUp(h2h, w), w))
const transform = (w2w, w) => w2w(substUp(w2w, w))
transform(w2w, w) basically just performs a single-pass bottom-up transform
using the given function w2w over the given data structure w. Suppose we
are given the following data:
const sampleBloated = {
just: "some",
extra: "crap",
that: [
"we",
{want: "to",
filter: ["out"],
including: {the: "following",
extra: true,
fields: 1}}]
}
We can now remove the extra fields like this:
transform(R.ifElse(R.allPass([R.is(Object), R.complement(R.is(Array))]),
L.remove(L.props("extra", "fields")),
R.identity),
sampleBloated)
// { just: 'some',
// that: [ 'we', { want: 'to',
// filter: ['out'],
// including: {the: 'following'} } ] }
Related work
Lenses are an old concept and there are dozens of academic papers on lenses and dozens of lens libraries for various languages. Here are just a few links:
- Polymorphic Update with van Laarhoven Lenses
- A clear picture of lens laws
- ramda/ramda-lens
- ekmett/lens
- julien-truffaut/Monocle
- xyncro/aether
- Flunc Optics
Feel free to suggest more links!