JSPM

  • Downloads 439239
  • License Apache-2.0

World's fastest and most memory efficient full text search library.

Package Exports

  • flexsearch

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (flexsearch) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme


Search Library

World's fastest and most memory efficient full text search library with zero dependencies.

When it comes to raw search speed, FlexSearch outperforms every other search library and also provides flexible search capabilities like multi-word matching, phonetic transformations and partial matching. It also has the most memory-efficient index. Keep in mind that updating or removing existing items from the index has a significant cost. When your index needs to be updated continuously, BulkSearch may be a better choice. FlexSearch also provides a non-blocking asynchronous processing model as well as web workers to perform index updates and queries in dedicated threads.

Comparison:

Supported Platforms:

  • Browser
  • Node.js

Supported Module Definitions:

  • AMD (RequireJS)
  • CommonJS (Node.js)
  • Closure (Xone)
  • Global (Browser)

All Features:

  • Web-Worker Support (not available in Node.js)
  • Partial Matching
  • Multiple Words
  • Phonetic Search
  • Relevance-based Scoring
  • Contextual Indexes
  • Limit Results
  • Caching
  • Asynchronous Mode
  • Custom Matchers
  • Custom Encoders

FlexSearch introduces a new scoring mechanism called Contextual Search, which was invented by Thomas Wilkerling, the author of this library. A contextual search boosts queries to a completely new level. The basic idea of this concept is to limit relevance by context instead of calculating relevance across the whole (unlimited) distance. Imagine you add a text block of several sentences to an index ID. If a query combines the first and the last word of that text block, are they really relevant to each other? In this way, contextual search also improves the results of relevance-based queries on large amounts of text data.

Note: This feature is not enabled by default.
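The idea of limiting relevance by context can be sketched in a few lines. This is a toy illustration, not the library's actual data structure: only words within `depth` positions of each other are registered as relevant pairs, instead of pairing every word with every other word in the document.

```javascript
// Toy sketch of the contextual-index idea (hypothetical helper, not
// FlexSearch's internals): register word pairs only within `depth`
// positions of each other, instead of across the whole text block.
function contextPairs(words, depth) {
    const pairs = [];
    for (let i = 0; i < words.length; i++) {
        // only look ahead up to `depth` positions
        for (let j = i + 1; j <= Math.min(i + depth, words.length - 1); j++) {
            pairs.push([words[i], words[j]]);
        }
    }
    return pairs;
}

const words = "the quick brown fox jumps".split(" ");
console.log(contextPairs(words, 2).length); // 7 pairs instead of 10 (all pairs)
```

With an unlimited distance, 5 words would produce 10 pairs; limiting the context to a depth of 2 cuts that down, which is where the memory and speed win comes from.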

Web-Worker Support

Each worker gets its own dedicated memory. Especially for larger indexes, web workers improve speed and available memory a lot. The FlexSearch index was tested with a 250 Mb text file containing 10 million words. The indexing was done silently in the background by multiple workers running in parallel in about 7 minutes. The final index reserves ~ 8.2 Mb of memory/space. A search took ~ 0.25 ms.

Note: It is slightly faster to use no web worker when the index or query isn't too big (index < 500,000 words, query < 25 words).

Compare BulkSearch vs. FlexSearch

Description        BulkSearch                           FlexSearch
Access             Read-write optimized index           Read-memory optimized index
Memory             Large: ~ 5 Mb per 100,000 words      Tiny: ~ 100 Kb per 100,000 words
Use case           • Limited content                    • Fastest possible search
                   • Existing items of the index need   • Existing items of the index do not
                     to be updated continuously           need to be updated continuously
                     (update, remove)                     (update, remove)
                   • Supports pagination                • Max out memory capabilities
Pagination         Yes                                  No
Ranked Searching   No                                   Yes
Contextual Index   No                                   Yes
WebWorker          No                                   Yes

Installation

HTML / Javascript
<html>
<head>
    <script src="js/flexsearch.min.js"></script>
</head>
...

Note: Use flexsearch.min.js for production and flexsearch.js for development.

Use latest from CDN:

<script src="https://cdn.rawgit.com/nextapps-de/flexsearch/master/flexsearch.min.js"></script>
Node.js
npm install flexsearch

In your code include as follows:

var FlexSearch = require("flexsearch");

Or pass in options when requiring:

var index = require("flexsearch").create({/* options */});

AMD

var FlexSearch = require("./flexsearch.js");

API Overview

Global methods:

Index methods:

Usage

Create a new index

FlexSearch.create(<options>)

var index = new FlexSearch();

Alternatively you can also use:

var index = FlexSearch.create();
Create a new index with custom options
var index = new FlexSearch({

    // default values:

    encode: "icase",
    mode: "ngram",
    async: false,
    cache: false
});

Read more: Phonetic Search, Phonetic Comparison, Improve Memory Usage

Add items to an index

Index.add(id, string)

index.add(10025, "John Doe");

Search items

Index.search(string|options, <limit>, <callback>)

index.search("John");

Limit the result:

index.search("John", 10);

Perform queries asynchronously:

index.search("John", function(result){
    
    // array of results
});

Update item in the index

Index.update(id, string)

index.update(10025, "Road Runner");

Remove item from the index

Index.remove(id)

index.remove(10025);

Reset index

index.reset();

Destroy the index

index.destroy();

Re-Initialize index

Index.init(<options>)

Note: Re-initialization will also destroy the old index!

Initialize (with same options):

index.init();

Initialize with new options:

index.init({

    /* options */
});

Add custom matcher

FlexSearch.addMatcher({REGEX: REPLACE})

Add global matchers for all instances:

FlexSearch.addMatcher({

    'ä': 'a', // replaces all 'ä' to 'a'
    'ó': 'o',
    '[ûúù]': 'u' // replaces multiple
});

Add private matchers for a specific instance:

index.addMatcher({

    'ä': 'a', // replaces all 'ä' to 'a'
    'ó': 'o',
    '[ûúù]': 'u' // replaces multiple
});
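What a matcher table does can be sketched without the library: each {pattern: replacement} pair is applied as a global regex replace on the input string before indexing and before searching. This is a hypothetical sketch of that behavior, not FlexSearch's actual implementation:

```javascript
// Hypothetical sketch of how a matcher table is applied (not the
// library's internals): each key is treated as a global regex and
// replaced by its value before the string is indexed or queried.
function applyMatchers(matchers, str) {
    for (const pattern in matchers) {
        str = str.replace(new RegExp(pattern, "g"), matchers[pattern]);
    }
    return str;
}

const matchers = { "ä": "a", "ó": "o", "[ûúù]": "u" };
console.log(applyMatchers(matchers, "sûr ändern")); // → "sur andern"
```

Because both the indexed text and the query pass through the same matchers, "sûr" and "sur" end up identical in the index and therefore match.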

Add custom encoder

Define a private custom encoder during creation/initialization:

var index = new FlexSearch({

    encode: function(str){
    
        // do something with str ...
        
        return str;
    }
});

Register a global encoder to be used by all instances

FlexSearch.register(name, encoder)

FlexSearch.register('whitespace', function(str){

    return str.replace(/ /g, '');
});

Use global encoders:

var index = new FlexSearch({ encode: 'whitespace' });

Call encoders directly

Private encoder:

var encoded = index.encode("sample text");

Global encoder:

var encoded = FlexSearch.encode("whitespace", "sample text");
Mixup/Extend multiple encoders
FlexSearch.register('mixed', function(str){
  
    str = this.encode("icase", str);  // built-in
    str = this.encode("whitespace", str); // custom
    
    return str;
});
FlexSearch.register('extended', function(str){
  
    str = this.encode("custom", str);
    
    // do something additional with str ...

    return str;
});
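The composition above is easy to picture if you think of encoders as plain string-to-string functions held in a registry. The following is a standalone sketch of that model (the `register`/`encode` helpers here are local stand-ins, not the library's own functions):

```javascript
// Standalone sketch of encoder registration and composition:
// an encoder is just a string -> string function, and a composed
// encoder simply calls the others in sequence.
const encoders = {};

function register(name, fn) { encoders[name] = fn; }
function encode(name, str) { return encoders[name](str); }

register("icase", str => str.toLowerCase());           // built-in equivalent
register("whitespace", str => str.replace(/ /g, ""));  // from the example above
register("mixed", str => encode("whitespace", encode("icase", str)));

console.log(encode("mixed", "Sample Text")); // → "sampletext"
```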

Get info

index.info();

Returns information about the index, e.g.:

{
    "bytes": 3600356288,
    "id": 0,
    "matchers": 0,
    "size": 10000,
    "status": false
}

Chaining

Simply chain methods like:

var index = FlexSearch.create()
                      .addMatcher({'â': 'a'})
                      .add(0, 'foo')
                      .add(1, 'bar');
index.remove(0).update(1, 'foo').add(2, 'foobar');

Enable Contextual Index

Create context-enabled index and also set the limit of relevance (depth):

var index = new FlexSearch({

    encode: "icase",
    mode: "strict",
    depth: 3
});

Use WebWorker (Browser only)

Create worker-enabled index and also set the count of parallel threads:

var index = new FlexSearch({

    encode: "icase",
    mode: "full",
    async: true,
    worker: 4
});

Adding items to worker index as usual (async enabled):

index.add(10025, "John Doe");

Perform search and simply pass in callback like:

index.search("John Doe", function(results){

    // do something with array of results
});

Options

FlexSearch is highly customizable. Making use of the right options can really improve your results as well as memory economy and query time.

Option   Values       Description

mode     "strict"     The indexing mode (tokenizer).
         "forward"
         "reverse"
         "ngram"
         "full"

encode   false        The encoding type.
         "icase"      Choose one of the built-ins or pass
         "simple"     a custom encoding function.
         "advanced"
         "extra"
         function()

cache    true         Enable/Disable caching.
         false

async    true         Enable/Disable asynchronous processing.
         false

worker   false        Enable/Disable and set the count of
         {number}     running worker threads.

depth    false        Enable/Disable contextual indexing and
         {number}     set the relevance depth (experimental).

Tokenizer

The tokenizer affects the required memory as well as query time and the flexibility of partial matches. Try to choose the uppermost of these tokenizers that fits your needs:

Option              Description                                      Example   Memory Factor (n = length of word)
"strict"            index whole words                                foobar    * 1
"ngram" (default)   index words partially through phonetic n-grams   foobar    * n / 3.5
"forward"           incrementally index words in forward direction   foobar    * n
"reverse"           incrementally index words in both directions     foobar    * 2n - 1
"full"              index every possible combination                 foobar    * n * (n - 1)
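The memory factors above become concrete if you count what each mode would put into the index for a single word. This is an illustrative sketch, not FlexSearch's actual tokenizer (and since a Set deduplicates repeated fragments, the distinct-term counts come out at or slightly below the table's factors):

```javascript
// Illustrative sketch of what each tokenizer mode indexes for one word
// (hypothetical helper, not the library's implementation).
function tokenize(word, mode) {
    const terms = new Set();
    const n = word.length;
    if (mode === "strict") {
        terms.add(word);                                      // whole word only
    } else if (mode === "forward") {
        for (let i = 1; i <= n; i++) terms.add(word.slice(0, i)); // all prefixes
    } else if (mode === "reverse") {
        for (let i = 1; i <= n; i++) terms.add(word.slice(0, i)); // prefixes
        for (let i = 0; i < n; i++) terms.add(word.slice(i));     // plus suffixes
    } else if (mode === "full") {
        for (let i = 0; i < n; i++)                               // every substring
            for (let j = i + 1; j <= n; j++) terms.add(word.slice(i, j));
    }
    return terms;
}

console.log(tokenize("foobar", "strict").size);  // 1        (* 1)
console.log(tokenize("foobar", "forward").size); // 6        (* n)
console.log(tokenize("foobar", "reverse").size); // 11       (* 2n - 1)
console.log(tokenize("foobar", "full").size);    // 20 distinct substrings
```

A query like "oob" can only hit the index in "reverse" or "full" mode, because only those modes store inner fragments of the word.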

Phonetic Encoding

Encoding affects the required memory as well as query time and phonetic matches. Try to choose the uppermost of these encoders that fits your needs, or pass in a custom encoder:

Option              Description                                         False-Positives   Compression
false               Turn off encoding                                   no                no
"icase" (default)   Case-insensitive encoding                           no                no
"simple"            Phonetic normalizations                             no                ~ 7%
"advanced"          Phonetic normalizations + Literal transformations   no                ~ 35%
"extra"             Phonetic normalizations + Soundex transformations   yes               ~ 60%
function()          Pass a custom encoder: function(string):string

Comparison (Matches)

Reference String: "Björn-Phillipp Mayer"

Query iCase Simple Advanced Extra
björn yes yes yes yes
björ yes yes yes yes
bjorn no yes yes yes
bjoern no no yes yes
philipp no no yes yes
filip no no yes yes
björnphillip no yes yes yes
meier no no yes yes
björn meier no no yes yes
meier fhilip no no yes yes
byorn mair no no no yes
(false positives) no no no yes
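A few simple normalization rules are enough to see why "bjorn" and "philipp" start matching once the stronger encoders kick in. The rules below are a toy approximation in the spirit of the "simple"/"advanced" encoders; the library's real transformation tables are more elaborate:

```javascript
// Toy phonetic normalization in the spirit of the "simple"/"advanced"
// encoders (assumed rules for illustration, not FlexSearch's actual tables).
function normalize(str) {
    return str
        .toLowerCase()
        .replace(/[àáâãä]/g, "a")       // strip accents / umlauts
        .replace(/[èéêë]/g, "e")
        .replace(/[ìíîï]/g, "i")
        .replace(/[òóôõö]/g, "o")
        .replace(/[ùúûü]/g, "u")
        .replace(/ph/g, "f")            // literal transformation: "philipp" -> "filipp"
        .replace(/([a-z])\1+/g, "$1");  // collapse repeated letters: "filipp" -> "filip"
}

console.log(normalize("Björn-Phillipp")); // → "bjorn-filip"
```

After this pass, the queries "bjorn", "philipp" and "filip" all reduce to the same encoded fragments as the reference string, which is exactly the behavior the table above shows for the stronger encoders.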

Memory Usage

The required memory for the index depends on several options:

Encoding            Memory usage per ~ 100,000 indexed words
false 260 kb
"icase" (default) 210 kb
"simple" 190 kb
"advanced" 150 kb
"extra" 90 kb
Mode Multiplied with: (n = average length of indexed words)
"strict" * 1
"ngram" (default) * n / 3.5
"forward" * n
"reverse" * 2n - 1
"full" * n * (n - 1)
Contextual Index Multiply the sum above with:
* (depth * 2 + 1)
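Putting the three factors together gives a quick back-of-the-envelope estimate. The figures below are assumed example inputs (taken from the tables above), not a benchmark:

```javascript
// Worked example of the memory estimate above, using assumed inputs:
// 100,000 words, "simple" encoding (~190 kb base), "ngram" mode with an
// average word length of n = 7, and a contextual depth of 2.
const baseKb = 190;               // "simple" encoding, per ~100,000 words
const modeFactor = 7 / 3.5;       // "ngram": * n / 3.5
const contextFactor = 2 * 2 + 1;  // contextual index: * (depth * 2 + 1)

console.log(baseKb * modeFactor * contextFactor); // → 1900 (kb, i.e. ~1.9 Mb)
```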

Example Options

Memory-optimized:

{
    encode: "extra",
    mode: "strict",
    threshold: 5
}

Speed-optimized:

{
    encode: "icase",
    mode: "strict",
    threshold: 5,
    depth: 2
}

Matching-tolerant:

{
    encode: "extra",
    mode: "full"
}

Balanced:

{
    encode: "simple",
    mode: "ngram",
    threshold: 3,
    depth: 3
}

Author FlexSearch: Thomas Wilkerling
License: Apache 2.0 License