# json-portable-db

High-performance file-backed JSON database with memory-first reads and atomic writes.

Stores all data as a single JSON file on disk. All reads are served from an in-memory `Map` (O(1) by id); writes are debounced and persisted atomically via tmp + rename.
## Motivation

Why choose this?

In modern development, we often fall into the "Infrastructure Botnet" trap: small applications that require a complex network of external services (DBaaS, cloud clusters, managed providers) just to function. This introduces unnecessary network latency, hidden costs, and fragility.
- **O(1) Reads:** Serving data from an in-memory `Map` provides effectively zero latency (~0 ms), outperforming any networked database.
- **Zero Infra:** No servers, no configuration, no ports. Your database is a single, human-readable JSON file.
- **Total Portability:** Perfect for local tools, CLIs, and apps where simplicity and speed are the priority.
## Install

```sh
npm install json-portable-db
```

## Quick start
```ts
import { JsonPortableDB } from "json-portable-db";

type Repo = { id: number; name: string; status: string };

const db = new JsonPortableDB({ path: "./data/db.json", backup: true });
await db.connect();

const repos = db.collection<Repo>("repos");

repos.insert({ id: 1, name: "my-repo", status: "pending" });
await repos.upsert(1, { status: "completed" });

const repo = repos.get(1); // O(1) by id
const found = repos.findOne(1); // same, via overload
const byName = repos.findOne(r => r.name === "my-repo"); // O(n) scan

await db.flush(); // force immediate write
await db.disconnect(); // flushes pending changes before closing
```

## Options
| Option | Type | Default | Description |
|---|---|---|---|
| `path` | `string` | — | Path to the JSON file (created if missing). |
| `backup` | `boolean` | `false` | Keep up to 5 rotating backups (`*.bak.1` … `*.bak.5`). |
| `debounceMs` | `number` | `200` | Debounce interval for deferred writes (ms). |
## API

### `JsonPortableDB`

```ts
new JsonPortableDB(options: JsonPortableDBOptions)
```

Extends `EventEmitter`. Emits:

- `"saved"` — `{ path: string }` after each successful persist.
- `"error"` — on I/O or serialization failures.
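Since the class extends `EventEmitter`, listeners follow the standard Node pattern. A self-contained sketch using a plain `EventEmitter` as a stand-in for a connected `db` (the event names and payload shape follow the list above; the handler bodies are illustrative):

```ts
import { EventEmitter } from "node:events";

// Stand-in for a connected JsonPortableDB instance; with the real
// library you would register the same listeners on `db` directly.
const db = new EventEmitter();

const saved: string[] = [];
db.on("saved", ({ path }: { path: string }) => saved.push(path));
db.on("error", (err: Error) => console.error("persist failed:", err.message));

// Simulate what the library does after a successful persist.
db.emit("saved", { path: "./data/db.json" });
```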
| Method | Description |
|---|---|
| `connect()` | Loads the file (or starts empty). Must be called before `collection()`. |
| `collection<T>(name)` | Returns a typed `Collection<T>`. Requires `connect()` first. |
| `flush()` | Forces an immediate write if there are pending changes. |
| `disconnect()` | Cancels the pending debounce and flushes before marking as disconnected. |
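The tmp + rename strategy behind persisting can be illustrated with Node's `fs` primitives. This is a simplified sketch of the pattern, not the package's code (`persistAtomically` is a hypothetical name):

```ts
import { writeFileSync, renameSync, readFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Write the full JSON to a sibling temp file, then rename it over the
// target. rename() is atomic on POSIX filesystems, so a crash mid-write
// leaves either the old file or the new one, never a half-written mix.
function persistAtomically(path: string, data: unknown): void {
  const tmp = path + ".tmp";
  writeFileSync(tmp, JSON.stringify(data, null, 2));
  renameSync(tmp, path);
}

const dir = mkdtempSync(join(tmpdir(), "jpdb-"));
const file = join(dir, "db.json");
persistAtomically(file, { repos: [{ id: 1, name: "my-repo" }] });
console.log(JSON.parse(readFileSync(file, "utf8")).repos[0].name); // "my-repo"
```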
### `Collection<T extends { id: number }>`

| Method / Prop | Description |
|---|---|
| `insert(doc)` | Inserts a shallow copy of `doc`. Throws on duplicate `id`. |
| `get(id)` | O(1) lookup by primary key. |
| `findOne(id)` | O(1) lookup by primary key (overload). |
| `findOne(predicate)` | O(n) scan when no index applies. |
| `upsert(id, patch)` | In-place `Object.assign` if the row exists; creates `{ id, ...patch }` otherwise. |
| `entries()` | Iterator over all rows (no clone). |
| `size` | Number of rows in memory. |
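The cost model in the table falls out of `Map`-backed storage: keyed lookups hit the hash map, predicate lookups must scan. A minimal illustration of the distinction (not the library's implementation):

```ts
type Doc = { id: number; name: string };

const rows = new Map<number, Doc>();
rows.set(1, { id: 1, name: "my-repo" });
rows.set(2, { id: 2, name: "other" });

// O(1): direct hash lookup by primary key.
const byId = rows.get(1);

// O(n): predicate scan over all rows when no index applies.
function findOne(pred: (d: Doc) => boolean): Doc | undefined {
  for (const d of rows.values()) if (pred(d)) return d;
  return undefined;
}
const byName = findOne(d => d.name === "other");

console.log(byId?.name, byName?.id); // "my-repo" 2
```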
## Scale limits

| 🛡️ Current Status |
|---|
| Works correctly (durable, race-condition safe). |

> [!IMPORTANT]
> The performance ranges below are based strictly on the number of elements (record count); the size of individual records and the number of fields are not yet characterized.
| Range | Behavior |
|---|---|
| ≤ 10,000 records | Comfortable zone for `connect`, `Map` reads, and periodic flush. |
| 10,000 – 50,000 | Still viable; a `console.warn` is emitted to monitor load and serialization times. |
| > ~50,000 | High risk of memory pressure and slow `JSON.stringify`/`parse`; a warning recommends migrating to SQLite. |
When you need multiple indexes, partial queries without loading everything into memory, or concurrent writers, consider SQLite (`better-sqlite3`, `sql.js`, etc.).

See `PERFORMANCE.md` for a detailed rationale of the design decisions.
## Requirements

- Node.js ≥ 18

## License

MIT