lite-fifo
Lightweight and efficient Queue implementations
This package aims to provide zero-dependency queue implementations, focusing on:
- memory footprint (RAM)
- efficiency (ops/sec)
The production code is dependency-free. The only dependencies are for testing.
Installation
npm install lite-fifo
Usage
const { DynamicArrayQueue } = require('lite-fifo');
const queue = new DynamicArrayQueue();
queue.enqueue(123);
queue.enqueue(45);
queue.enqueue(67);
console.log(queue.toJSON());
// => [123, 45, 67]
const temp = queue.dequeue(); // holds 123
console.log(queue.toJSON());
// => [45, 67]
Common implementations and mistakes
Array + push + shift
A very common implementation of a queue looks like this:
class DynamicArrayQueue { /* DON'T USE THIS CODE, IT DOESN'T SCALE */
  constructor() {
    this._arr = [];
  }

  enqueue(item) {
    this._arr.push(item);
  }

  dequeue() {
    return this._arr.shift();
  }
}
The time complexity of the dequeue operation is O(n): shift() removes the first element and moves every remaining element one slot forward. At a small scale we wouldn't notice.
At a large scale, say 300,000 items, this implementation achieves only 5 (five!) ops per second. Complexity matters.
The bottom line: this implementation is a mistake.
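The quadratic behaviour is easy to reproduce. A minimal sketch (absolute timings vary by machine and engine; the helper names are illustrative, not part of the package):

```javascript
// Draining an array with shift() costs O(n) per call, so a full drain is O(n^2).
// Advancing an index instead touches each element once and stays O(n) overall.
const N = 100000;

const arr = [];
for (let i = 0; i < N; i++) arr.push(i);
let t0 = Date.now();
while (arr.length > 0) arr.shift(); // front removal shifts the remaining elements
const shiftMs = Date.now() - t0;

const arr2 = [];
for (let i = 0; i < N; i++) arr2.push(i);
t0 = Date.now();
let head = 0;
while (head < arr2.length) head++; // O(1) per "dequeue"
const indexMs = Date.now() - t0;

// shiftMs is typically much larger than indexMs at this scale
console.log({ shiftMs, indexMs });
```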
Linked List
A linked-list implementation of a queue behaves very well in terms of time complexity: both enqueue and dequeue are O(1).
On the memory side, the provided implementation, LinkedQueue, introduces an optimization: instead of relying on a doubly-linked list, it relies on a singly-linked list.
However, even with this optimization, the memory footprint of LinkedQueue is the highest (see the benchmark table below).
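For illustration, a minimal singly-linked-list queue looks roughly like this. This is a sketch of the general technique, not the package's actual LinkedQueue source:

```javascript
// Minimal singly-linked-list queue: O(1) enqueue and dequeue.
// One link per node (next) instead of two (prev/next) halves the link overhead,
// but every element still pays for its own wrapper node object.
class SimpleLinkedQueue {
  constructor() {
    this._head = null; // dequeue side
    this._tail = null; // enqueue side
    this._size = 0;
  }

  enqueue(item) {
    const node = { item, next: null };
    if (this._tail) this._tail.next = node;
    else this._head = node; // queue was empty
    this._tail = node;
    this._size++;
  }

  dequeue() {
    if (!this._head) return undefined;
    const { item } = this._head;
    this._head = this._head.next;
    if (!this._head) this._tail = null; // queue is now empty
    this._size--;
    return item;
  }

  size() { return this._size; }
}
```

The per-node wrapper objects are exactly where the memory overhead comes from.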
Better implementations
Linked List of Ring Buffers
A ring buffer, or cyclic queue, is a bounded data structure backed by an array. It's very fast, but bounded.
We can, however, introduce a new data structure named ChunkedQueue: a LinkedQueue in which each node holds a ring buffer.
It's not a generic queue, as it has a weakness that is out of the scope of this page, so use it with caution.
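The idea can be sketched as follows. This is an illustrative toy (the class name, chunk size, and details are assumptions, not the real ChunkedQueue):

```javascript
// Sketch of a "linked list of ring buffers": a linked chain of fixed-size
// array chunks. enqueue writes into the tail chunk; dequeue reads from the
// head chunk; full chunks are linked and drained chunks are dropped in O(1).
const CHUNK_SIZE = 4; // tiny, for demonstration only

class SketchChunkedQueue {
  constructor() {
    this._head = this._newChunk();
    this._tail = this._head;
  }

  _newChunk() {
    return { items: new Array(CHUNK_SIZE), start: 0, end: 0, next: null };
  }

  enqueue(item) {
    if (this._tail.end === CHUNK_SIZE) { // tail chunk full: link a fresh one
      this._tail.next = this._newChunk();
      this._tail = this._tail.next;
    }
    this._tail.items[this._tail.end++] = item;
  }

  dequeue() {
    let chunk = this._head;
    if (chunk.start === chunk.end) {
      if (!chunk.next) return undefined; // queue is empty
      this._head = chunk = chunk.next;   // head chunk drained: drop it
      if (chunk.start === chunk.end) return undefined;
    }
    return chunk.items[chunk.start++];
  }
}
```

Each chunk amortizes the node overhead of the linked list across many elements, which is where the memory savings come from.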
DynamicCyclicQueue
Same as a cyclic queue, but it can exceed the initial length of the underlying array.
How? When the queue is full, the next enqueue operation triggers a re-order of the underlying array and then expands it with push operations.
This process is O(1) amortized, so this is a generic queue and can be used in any scenario.
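A growable ring buffer can be sketched like this. It is a simplified illustration of the technique; the real DynamicCyclicQueue's growth strategy and internals may differ:

```javascript
// Sketch of a growable ring buffer: a cyclic array queue that, when full,
// rotates its elements so the oldest sits at index 0, then doubles capacity.
class SketchDynamicCyclicQueue {
  constructor(initialCapacity = 4) {
    this._arr = new Array(initialCapacity);
    this._start = 0; // index of the oldest element
    this._size = 0;
  }

  enqueue(item) {
    if (this._size === this._arr.length) {
      // Full: re-order so the queue starts at index 0, then expand the array.
      this._arr = this._arr.slice(this._start).concat(this._arr.slice(0, this._start));
      this._start = 0;
      this._arr.length = this._size * 2; // doubling keeps enqueue O(1) amortized
    }
    this._arr[(this._start + this._size) % this._arr.length] = item;
    this._size++;
  }

  dequeue() {
    if (this._size === 0) return undefined;
    const item = this._arr[this._start];
    this._arr[this._start] = undefined; // release the reference for GC
    this._start = (this._start + 1) % this._arr.length;
    this._size--;
    return item;
  }
}
```

Because the expensive rotate-and-grow step happens only once per doubling, its cost spreads across all the enqueues that follow it.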
The Benchmark
Methodology
The scenario being checked is the following:
set P = 100000
enqueue 30P items
dequeue 5P
do 6 times:
  enqueue 1P
  dequeue 5P
Apply this scenario T times (T=20) for every relevant class (see table below), measuring RAM used and ops/sec.
Remove the best and worst results (in terms of ops/sec), and take the average (mean) of the rest.
Note: we took a very large value for P; otherwise, complexity-related issues wouldn't show up.
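The steps above can be sketched as a small driver. The function name is hypothetical and this is not the package's actual benchmark harness; it just encodes the same scenario:

```javascript
// Runs the benchmark scenario against any queue exposing enqueue/dequeue
// and reports elapsed milliseconds.
function runScenario(makeQueue, P = 100000) {
  const queue = makeQueue();
  const t0 = Date.now();
  for (let i = 0; i < 30 * P; i++) queue.enqueue(i); // enqueue 30P items
  for (let i = 0; i < 5 * P; i++) queue.dequeue();   // dequeue 5P
  for (let round = 0; round < 6; round++) {          // 6 rounds of: +1P, -5P
    for (let i = 0; i < P; i++) queue.enqueue(i);
    for (let i = 0; i < 5 * P; i++) queue.dequeue();
  }
  return Date.now() - t0;
}
```

Note that the queue never empties during the run: 25P items remain after the first phase, and each round removes a net 4P, leaving 1P at the end.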
Results
| Class Name | Ops/Sec | RAM used (MB) |
|---|---|---|
| DynamicArrayQueue | 5 | 8 |
| ChunkedQueue | 28307 | 28 |
| DynamicCyclicQueue | 44864 | 102 |
| LinkedQueue | 25815 | 143 |
Analysis
- The naive implementation, DynamicArrayQueue, is so slow that it can't be considered an option
- The fastest implementation is DynamicCyclicQueue, with average RAM usage
- The default implementation, ChunkedQueue, has the lowest RAM usage, with the second-fastest ops/sec
- The common LinkedQueue implementation is not the fastest one, even with O(1) time complexity, and it's the most wasteful in terms of RAM usage
Suggestions
- Use the provided DynamicCyclicQueue for a generic solution
- For some cases, e.g. telemetry shipping, ChunkedQueue is better: very low memory footprint
License
MIT © Ron Klein