Package Exports
- @tensorflow-models/universal-sentence-encoder
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@tensorflow-models/universal-sentence-encoder) requesting support for the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
Universal Sentence Encoder lite
The Universal Sentence Encoder (Cer et al., 2018) is a model that encodes text into 512-dimensional embeddings. These embeddings can then be used as inputs to natural language processing tasks such as sentiment classification and textual similarity analysis.
This module is a TensorFlow.js FrozenModel converted from the Universal Sentence Encoder lite (module on TFHub), a lightweight version of the original. The lite model is based on the Transformer (Vaswani et al., 2017) architecture and uses an 8k word piece vocabulary.
Usage
To import in npm:
import * as use from '@tensorflow-models/universal-sentence-encoder';
or as a standalone script tag:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/universal-sentence-encoder"></script>Then:
// Load the model.
const model = await use.load();
// Embed an array of sentences.
const sentences = [
  'Hello.',
  'How are you?'
];
const embeddings = await model.embed(sentences);
// `embeddings` is a 2D tensor consisting of the 512-dimensional embeddings for each sentence.
// So in this example `embeddings` has the shape [2, 512].
const verbose = true;
embeddings.print(verbose);
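These embeddings can be compared directly for the textual similarity use case mentioned above. The following is a minimal sketch (not part of the original README) that computes the cosine similarity between the two sentences' embeddings by reading the tensor values into plain JavaScript; it assumes it runs in the same async context as the example above.
// Sketch: cosine similarity between the two embeddings above.
// Assumes `embeddings` is the [2, 512] tensor returned by `model.embed`.
const [first, second] = await embeddings.array();
const dot = first.reduce((sum, v, i) => sum + v * second[i], 0);
const norm = vec => Math.sqrt(vec.reduce((sum, v) => sum + v * v, 0));
const similarity = dot / (norm(first) * norm(second));
// Values closer to 1 indicate the encoder considers the sentences more similar.
console.log(similarity);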