BulkSearch
Superfast, lightweight full-text search engine.
Searching full text with BulkSearch is up to 1,000 times faster than an equivalent ElasticSearch implementation.
Benchmark Comparison: https://jsperf.com/bulksearch
All Features:
- Partial Words
- Multiple Words
- Flexible Word Order
- Phonetic Search
- Limit Results
- Caching
- Asynchronous Mode
- Custom Matchers
- Custom Encoders
Plugins In Progress:
- Common Phonetic Encoders:
- Soundex
- Cologne
- Metaphone
- Caverphone
- Levenshtein
- Hamming
- Matchrating
- NGram
- Dedicated Memory (Worker)
Installation
Node.js
npm install bulksearch
Include it in your code as follows:
var BulkSearch = require("bulksearch");
HTML / JavaScript
<html>
<head>
<script src="https://cdn.rawgit.com/nextapps-de/bulksearch/dist/bulksearch.min.js"></script>
</head>
...
AMD
var BulkSearch = require("bulksearch");
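Alternatively, a minimal AMD-style sketch (assuming the module is registered with your loader, e.g. RequireJS, under the id "bulksearch"):
require(["bulksearch"], function(BulkSearch){
    var index = BulkSearch.create();
});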
Usage (API)
Create a new index
var index = new BulkSearch();
Alternatively you can also use:
var index = BulkSearch.create();
Create a new index with custom options
BulkSearch.create(OPTIONS)
var index = new BulkSearch({
// default values:
type: "integer",
encode: "icase",
boolean: "and",
strict: false,
ordered: false,
multi: false,
cache: false
});
Add item to the index
Index.add(ID, TEXT)
index.add(10025, "John Doe");
Note: The data type of passed IDs has to be specified on creation. It is recommended to use the lowest possible data range here, e.g. use "short" when IDs do not exceed 65,535 (see the sketch below the table).
ID Type | Range of Values | Memory of Index per ~100,000 Words
------- | --------------- | -----------------------------------
Byte | 0 - 255 | 683 kb
Short | 0 - 65,535 | 1.3 Mb
Integer | 0 - 4,294,967,295 | 2.7 Mb
Float | 0 - * (16 digits) | 5.3 Mb
String | (unlimited) | 1.3 Mb * char count of IDs
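For example, a minimal sketch (ID and text chosen for illustration) of an index created with the "short" ID type because its IDs stay below 65,535:
var index = new BulkSearch({ type: "short" });
index.add(1024, "Jane Doe"); // 1024 fits into the short range (0 - 65,535)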
Search items
Index.search(TEXT, LIMIT, CALLBACK)
index.search("John");
Limit the result:
index.search("John", 10);
Perform queries asynchronously:
index.search("John", function(result){
// array of results
});
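The limit and the callback can be combined, following the Index.search(TEXT, LIMIT, CALLBACK) signature above:
index.search("John", 10, function(result){
    // up to 10 results, delivered asynchronously
});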
Update item in the index
Index.update(ID, TEXT)
index.update(10025, "Road Runner");
Remove item from the index
Index.remove(ID)
index.remove(10025);
Optimize/Cleanup the index
Index.cleanup()
index.cleanup();
Destroy the index
Index.destroy()
index.destroy();
Initialize the index
Index.init(OPTIONS)
index.init();
Add custom matcher
Index.addMatcher(KEY_VALUE_PAIRS)
index.addMatcher({
'ä': 'a', // replaces all 'ä' with 'a'
'ö': 'o',
'Ü': 'u'
});
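A small usage sketch, under the assumption that matchers are applied to both the indexed text and the query (entry chosen for illustration):
index.add(1, "Björn");
index.search("Bjorn"); // assumption: matches, since 'ö' is replaced by 'o'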
Add custom encoder
var index = new BulkSearch({
encode: function(str){
// do something with str ...
return str;
}
});
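For illustration, a minimal sketch of a custom encoder that lowercases the text and strips non-alphanumeric characters (any function mapping a string to a string can be plugged in here):
var index = new BulkSearch({
    encode: function(str){
        // normalize case and drop everything except letters, digits and spaces
        return str.toLowerCase().replace(/[^a-z0-9 ]/g, "");
    }
});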
Get info
Index.info()
index.info();
Returns information about the index, e.g.:
{
"bytes": 3600356288,
"chunks": 9,
"fragmentation": 0,
"fragments": 0,
"id": 0,
"length": 7798,
"matchers": 0,
"size": 10000,
"status": false
}
Note: When the fragmentation value is about 50% or higher, you should consider using cleanup() to free fragmented memory.
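A small sketch based on that note, assuming the fragmentation value returned by info() is reported as a ratio between 0 and 1:
var stats = index.info();
if(stats.fragmentation >= 0.5){
    // free fragmented memory as recommended above
    index.cleanup();
}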
Calculate RAM
The required RAM per instance can be calculated as follows:
BYTES = CONTENT_CHAR_COUNT * (BYTES_OF_ID + 2)
The character count may be lower depending on the phonetic settings (e.g. when using soundex).
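As a worked example (numbers chosen for illustration, assuming 4 bytes per integer ID): an index holding about 1,000,000 content characters with integer IDs would need roughly:
BYTES = 1,000,000 * (4 + 2) = 6,000,000 bytes (about 5.7 Mb)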
Author of BulkSearch: Thomas Wilkerling
License: Apache 2.0