trans-render

0.0.57 · MIT License · 705 downloads

Instantiate an HTML Template

Package Exports

  • trans-render
  • trans-render/append.js
  • trans-render/init.js
  • trans-render/interpolate.js
  • trans-render/repeatInit.js
  • trans-render/update.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (trans-render) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

trans-render

Published on webcomponents.org

trans-render provides an alternative way of instantiating a template. It draws inspiration from the (least) popular features of XSLT. Like XSLT, trans-render performs transforms on elements by matching tests against them. Whereas XSLT uses XPath for its tests, trans-render uses CSS path tests via the element.matches() and element.querySelector() methods.

XSLT can take pure XML with no formatting instructions as its input. Generally speaking, the XML that XSLT acts on isn't a bunch of semantically meaningless div tags, but rather a nice semantic document, whose intrinsic structure is enough to go on, in order to formulate a "transform" that doesn't feel like a hack.

Likewise, with the advent of custom elements, template markup will tend to be much more semantic, like XML. trans-render tries to rely as much as possible on this intrinsic semantic nature of the template markup to give enough clues on how to fill in the needed "potholes", like textContent and property setting. But trans-render is completely extensible, so it can certainly accommodate custom markup (like string interpolation, or common binding attributes) by using additional, optional helper libraries.

This leaves the template markup quite pristine, but it does mean that the separation between the template and the binding instructions will tend to require looking in two places rather than one. And if the template document structure changes, separate adjustments may be needed to keep the binding rules in sync, much like how separate style rules often need adjusting when the document structure changes.

Advantages

By keeping the binding separate, the same template can thus be used to bind with different object structures.

Providing the binding transform in JS form inside the init function signature has the advantage that one can benefit from TypeScript typing of custom and native DOM elements without any additional IDE tooling.
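
For example, here is a minimal sketch of a typed binding in plain JavaScript using a JSDoc annotation. The destructured target parameter is an assumption based on the loop example further below; the model and the open manipulation are purely illustrative:

const model = { summaryText: 'hello' };

/**
 * @param {{ target: HTMLDetailsElement }} arg
 */
const expandDetails = ({ target }) => {
    target.open = true;                              // type-checked against HTMLDetailsElement
    return { summary: x => model.summaryText };      // nested Transform for the <summary> child
};

const Transform = {
    details: expandDetails
};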

Another advantage of separating the binding like this is that one can insert comments, console.log statements, and/or breakpoints in order to step through the binding process.

For more musings on the question of what is this good for, please see the rambling section below.

Workflow

trans-render provides helper functions for cloning a template, and then walking through the DOM, applying rules in document order. Note that the document can grow, as processing takes place (due, for example, to cloning sub templates). It's critical, therefore, that the processing occur in a logical order, and that order is down the document tree. That way it is fine to append nodes before continuing processing.

Drilling down to children

For each matching element, after modifying the node, you can instruct the processor which node(s) to consider next.

Most of the time, especially during initial development, you won't need / want to be so precise about where to go next. Generally, the pattern, as we will see, is just to define transform rules that match the HTML Template document structure pretty closely.

So, in the example we will see below, this notation:

const Transform = {
    details: {
        summary: x => model.summaryText
    }
};

means "if a node has tag name "details", then continue processing the next siblings of details, but also, find the first descendent of the node that has tag name "summary", and set its textContent property to model.summaryText."

If most of the template is static, but there's a deeply nested element that needs modifying, it is possible to drill straight down to that element by specifying a "Select" string value, which invokes querySelector. But beware: there's no going back to previous elements once that's done. If your template is dense with dynamic pockets, you will more likely want to navigate to the first child by setting Select = '*'.

So the syntax shown above is equivalent to:

const Transform = {
    details: {
        Select: 'summary',
        Transform: {
            summary: x => model.summaryText
        }
    }
};

In this case, the details property is a "NextStep" JS Object.

Clearly, the first example is easier, but you need to adopt the second form if you want to fine-tune the next processing steps.

Matching next siblings

We will most likely also want to check the next siblings for matches. Previously, in order to do this, you had to make sure "matchNextSibling" was passed back for every match, but that proved cumbersome. The current implementation checks the next sibling(s) for matches by default. You can stop the processor from going any further by specifying "SkipSibs" in the "NextStep" object, something to strongly consider when looking for optimization opportunities.

It is deeply unfortunate that the DOM query API doesn't provide a convenience function for finding the next sibling that matches a query, similar to querySelector. Just saying. But some support for "cutting to the chase" laterally is also provided, via the "NextMatch" property in the NextStep object.
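
A minimal, unverified sketch of both options in a "NextStep" object, assuming SkipSibs is a boolean flag and NextMatch takes a CSS selector string; the selectors and model are illustrative only:

const model = { summaryText: 'hello' };

// Stop testing any further siblings once <details> has been processed
const prunedTransform = {
    details: x => ({
        Select: 'summary',
        Transform: {
            summary: y => model.summaryText
        },
        SkipSibs: true
    })
};

// Skip ahead laterally to the next sibling matching a selector
const lateralTransform = {
    details: x => ({
        NextMatch: 'section'
    })
};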

At this point, only a synchronous workflow is provided.

Syntax Example:

<template id="sourceTemplate">
    <details>
        ...
        <summary></summary>
        ...
    </details>
</template>
<div id="target"></div>
<script type="module">
    import { init } from '../init.js';
    const model = {
        summaryText: 'hello'
    }
    const Transform = {
        details: {
            summary: x => model.summaryText
        }
    };
    init(sourceTemplate, { Transform }, target);
</script>

Produces

<div id="target">
    <details>
        ...
        <summary>hello</summary>
        ...
    </details>
</div>

"target" is the HTML element we are populating. The transform matches can return a string, which will be used to set the textContent of the target. Or the transform can do its own manipulations on the target element, and then return a "NextStep" object specifying where to go next, or it can return a new Transform, which will get applied the first child by default.

Note the unusual property-name casing for the NextStep object in the JavaScript arena: Transform, Select, SkipSibs, etc. As we will see, this pattern allows the interpreter to distinguish between CSS matches for a nested Transform and the keys of a "NextStep" JS object.
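
Putting that together, here is a minimal sketch of the three possible return shapes; the open manipulation and the extra model property are hypothetical illustrations:

const model = { summaryText: 'hello', headerText: 'greetings' };

const Transform = {
    // 1. Return a string: it is assigned to the matched element's textContent.
    summary: x => model.summaryText,

    // 2. Manipulate the matched element yourself, then return a "NextStep" object.
    details: ({ target }) => {
        target.open = true;
        return { Select: 'summary' };
    },

    // 3. Return a new Transform, which is applied to the first child by default.
    section: x => ({
        h2: y => model.headerText
    })
};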

Use Case 1: Applying the DRY principle to (post) punk rock lyrics

Example 1a (only viewable at webcomponents.org)

Demonstrates including sub templates.

Note the transform rule from that example (viewable on webcomponents.org):

Transform: {
    '*': x => ({
        Select: '*'
    }),
    // ...
}

"*" is a match for all css elements. What this is saying is "for any element regardless of css-matching characteristics, continue processing its first child (Select => querySelector). This, combined with the default setting to match all the next siblings means that, for a "sparse" template with very few pockets of dynamic data, you will be doing a lot more processing than needed, as every single HTMLElement node will be checked for a match. But for initial, pre-optimization work, this transform rule can be a convenient way to get things done more quickly.

Example 1b (only viewable at webcomponents.org)

Demonstrates use of update, rudimentary interpolation, recursive select.

Reapplying (some) of the transform

Often, we want to reapply a transform, after something changes -- typically the source data.

The ability to do this is illustrated in the previous example. Critical syntax shown below:

<script type="module">
    import { init } from '../init.js';
    import { interpolate } from '../interpolate.js';
    import {update} from '../update.js';
    const ctx = init(Main, {
        model:{
            Day1: 'Monday', Day2: 'Tuesday', Day3: 'Wednesday', Day4: 'Thursday', Day5: 'Friday',
            Day6: 'Saturday', Day7: 'Sunday',
        },
        interpolate: interpolate,
        $: id => window[id],
    }, target);
    changeDays.addEventListener('click', e=>{
        ctx.model = {
            Day1: 'måndag', Day2: 'tisdag', Day3: 'onsdag', Day4: 'torsdag', Day5: 'fredag',
            Day6: 'lördag', Day7: 'söndag',
        }
        update(ctx, target);
    })
</script>

Loop support (NB: Not yet optimized)

The next big use case for this library is using it in conjunction with a virtual scroller. As far as I can see, this library should perform quite well in that scenario.

However, no self-respecting rendering library would be complete without some internal support for repeating lists, and this library is no exception. While the performance of rendering the initial list is likely to be acceptable, no effort has yet been made to utilize state-of-the-art tricks that keep the number of DOM changes for list updates to a minimum.

Anyway, the syntax is shown below. What's notable is that a sub-template is cloned repeatedly, then populated using the simple init / update methods.

    <div>
        <template id="itemTemplate">
            <li></li>
        </template>
        <template id="list">
            <ul id="container"></ul>
            <button id="addItems">Add items</button>
            <button id="removeItems">Remove items</button>
        </template>
        <div id="target"></div>

        <script type="module">
            import { init } from '../init.js';
            import { repeatInit } from '../repeatInit.js';
            import {repeatUpdate} from '../repeatUpdate.js';
            import {update} from '../update.js';
            const options = {matchNext: true};
            const ctx = init(list, {
                Transform: {
                    ul: ({ target, ctx }) => {
                        // On the initial pass (not an update), clone the item template ten times into the <ul>
                        if (!ctx.update) {
                            repeatInit(10, itemTemplate, target);
                        }
                        // Each cloned <li> gets its textContent set from its index
                        return ({
                            li: ({ idx }) => 'Hello ' + idx,
                        });
                    }
                }
            }, target, options);
            addItems.addEventListener('click', e => {
                repeatUpdate(15, itemTemplate, container);
                update(ctx, target, options);
            });
            removeItems.addEventListener('click', e =>{
                repeatUpdate(5, null,  container);
            })
        </script>
    </div>

Ramblings From the Department of Faulty Analogies

When defining an HTML based user interface, the question arises whether styles should be inlined in the markup or kept separate in style tags and/or CSS files.

The ability to keep the styles separate from the HTML does not invalidate support for inline styles. The browser supports both, and probably always will.

Likewise, arguing for the benefits of this library is not in any way meant to disparage the usefulness of the current prevailing orthodoxy of including the binding / formatting instructions in the markup. I would be delighted to see the template instantiation proposal, with support for inline binding, added to the arsenal of tools developers could use. Should that proposal come to fruition, this library, hovering under 1KB, would be in mind-share competition with one that is 0KB, with the full backing / optimization work of Chrome, Safari, Firefox. Why would anyone use this library then?

And in fact, the library described here is quite open ended. Until template instantiation becomes built into the browser, this library could be used as a tiny stand-in. Once template instantiation is built into the browser, this library could continue to supplement the native support (or the other way around, depending.)

For example, in the second example above, the core "init" function described here has nothing special to offer in terms of string interpolation, since CSS matching provides no help:

<div>Hello {{Name}}</div>

We provide a small helper function "interpolate" for this purpose, but as this is a fundamental use case for template instantiation, and as this library doesn't add much value for that use case, native template instantiation could be used as a first round of processing. And where it makes sense to tightly couple the binding to the template, use it there as well, followed by a binding step using this library, just as inline styles supplemented by CSS style tags/files (or the other way around) is something seen quite often.
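
For what it's worth, here is a minimal sketch of wiring the interpolate helper up for a template like the one above, modeled on Example 1b; whether passing interpolate in the init options is by itself enough to expand the {{Name}} placeholder is an assumption based on that example, and the ids here are made up:

<template id="greetingTemplate">
    <div>Hello {{Name}}</div>
</template>
<div id="greetingTarget"></div>
<script type="module">
    import { init } from '../init.js';
    import { interpolate } from '../interpolate.js';
    init(greetingTemplate, {
        model: { Name: 'World' },
        interpolate: interpolate
    }, greetingTarget);
</script>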

A question in my mind, is how does this rendering approach fit in with web components (I'm going to take a leap here and assume that HTML Modules / Imports in some form makes it into browsers, even though I think the discussion still has some relevance without that).

I think this alternative approach can provide value, by providing a process for "Pipeline Rendering": Rendering starts with an HTML template element, which produces transformed markup using init or native template instantiation. Then consuming / extending web components could insert additional bindings via the CSS-matching transformations this library provides.

To aid with this process, the init and update functions provide a rendering options parameter, which supports optional "initializedCallback" and "updatedCallback" settings. This allows a pipeline processing sequence to be set up, similar in concept to Apache Cocoon.
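
A minimal sketch of what hooking into that options parameter might look like; only the names initializedCallback and updatedCallback come from the description above, and everything else (including whether the callbacks take arguments) is assumed:

import { init } from '../init.js';
import { update } from '../update.js';

// Reusing sourceTemplate, Transform and target from the syntax example above.
const options = {
    // a consuming / extending component could run a follow-up, CSS-matched transform here
    initializedCallback: () => console.log('initial render complete'),
    updatedCallback: () => console.log('update complete')
};
const ctx = init(sourceTemplate, { Transform }, target, options);
// later, after the model changes:
update(ctx, target, options);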

NB In re-reading the template instantiation proposal with a fresh set of eyes, I see now that there has in fact been some careful thought given to the idea of providing a kind of pipeline of binding. And as mentioned above, this library provides little help when it comes to string interpolation, so the fact that the proposal provides some hooks for callbacks is really nice to see.

I may not yet fully grasp the proposal, but it still does appear to me that the template instantiation proposal is only useful if one defines regions ahead of time in the markup where dynamic content may go.

This library, on the other hand, considers the entire template document open for amendment. This may be alarming if, like me, you find yourself comparing this effort to the constructible stylesheet proposal, where authors need to specify which elements can be themed.

However, the use case is quite different. In the case of stylesheets, we are talking about global theming, affecting large numbers of elements at the same time. The use case I'm really considering is one web component extending another. It doesn't seem that unreasonable to provide maximum flexibility in that circumstance. Yes, I suppose the ability to mark some tags as "undeletable / non negotiable" might be nice, but I see no way to enforce that.

Client-side JS faster than SSR?

Another interesting case to consider is this Periodic Table Codepen example. Being what it is, it is no surprise that there's a lot of repetitive HTML markup needed to define the table.

An intriguing question is this: could this be the first known scenario in the history of the planet where rendering time (including first paint) would be improved, rather than degraded, with the help of client-side JavaScript?

The proper, natural instinct of a good modern developer, including the author of the codepen, is to generate the HTML from a concise data format using a server-side language (Pug).

But using this library, and cloning some repetitive templates on the client side, reduces the download size from 16KB to 14KB, and may improve other performance metrics as well (results are ambiguous).

You can compare the two here: This link uses client-side trans-rendering. This link uses all static HTML.