Question Answering for Node.js
Run question answering locally, directly in Node.js: no Python or C++ code needed!
Installation
npm install question-answering
Simple example
import { QAClient } from "question-answering";
const text = `
Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season.
The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.
As this was the 50th Super Bowl, the league emphasized the "golden anniversary" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as "Super Bowl L"), so that the logo could prominently feature the Arabic numerals 50.
`;
const question = "Who won the Super Bowl?";
const qaClient = await QAClient.fromOptions();
const answer = await qaClient.predict(question, text);
console.log(answer); // { text: 'Denver Broncos', score: 0.37 }
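Note that this example uses top-level await, which is only available in an ES module context. In a CommonJS project you can wrap the calls in an async function instead. A minimal sketch, assuming the package also ships a CommonJS entry point:

const { QAClient } = require("question-answering");

const text = "The Denver Broncos defeated the Carolina Panthers 24–10 to earn their third Super Bowl title.";

async function main() {
  // Load the default model and vocabulary, then run a prediction
  const qaClient = await QAClient.fromOptions();
  const answer = await qaClient.predict("Who won the Super Bowl?", text);
  console.log(answer); // { text: 'Denver Broncos', score: ... }
}

main().catch(console.error);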
Details
This package uses the tokenizers library (built with Rust) to process the input text. It then runs a DistilBERT model fine-tuned for question answering (86.9 F1 on the SQuAD v1.1 dev set, compared to 88.5 for BERT-base) through TensorFlow.js. The default model and vocabulary are automatically downloaded when the package is installed, and everything runs locally.
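Because fromOptions loads the model and tokenizer, it makes sense to create the client once and reuse it for several predictions. A short sketch built on the predict call shown above:

import { QAClient } from "question-answering";

const text = "The game was played on February 7, 2016, at Levi's Stadium in Santa Clara, California.";
const questions = [
  "Where was the game played?",
  "When was the game played?"
];

// Instantiate once: model and tokenizer loading happens here
const qaClient = await QAClient.fromOptions();

for (const question of questions) {
  const answer = await qaClient.predict(question, text);
  console.log(`${question} -> ${answer.text} (score: ${answer.score})`);
}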
You can provide your own options when instantiating a QAClient:
const qaClient = await QAClient.fromOptions({
  // model?: ModelOptions;
  // tokenizer?: BertWordPieceTokenizer;
  vocabPath: "../myVocab.txt"
});
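For example, to supply your own tokenizer you could build a BertWordPieceTokenizer with the tokenizers package and pass it in. A sketch, assuming the tokenizers package exposes BertWordPieceTokenizer.fromOptions (check its documentation for the exact signature and options):

import { QAClient } from "question-answering";
import { BertWordPieceTokenizer } from "tokenizers";

// Build a WordPiece tokenizer from a local vocabulary file
// (the vocabFile path is illustrative)
const tokenizer = await BertWordPieceTokenizer.fromOptions({
  vocabFile: "../myVocab.txt"
});

const qaClient = await QAClient.fromOptions({ tokenizer });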
Thanks to the native execution of the SavedModel format in TensorFlow.js, performance is similar to that of TensorFlow in Python:
[Figure: Inference latency of MobileNet v2 for native execution in Node.js versus converted execution and core Python TF, on both CPU and GPU]