vitreousdatabase 0.2.0 (MIT)

A JSON-file-backed non-relational database module for Node.js

Package Exports

  • vitreousdatabase
  • vitreousdatabase/index.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (vitreousdatabase) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.


VitreousDataBase


A lightweight, file-backed non-relational database for Node.js. No external dependencies — data is stored as a JSON file on disk, with schema validation, constraints, and nested object support built in.


Requirements

  • Node.js >= 18.0.0

Installation

Copy the module into your project, or install it from npm:

npm install vitreousdatabase

Then require it:

const { Database } = require('vitreousdatabase');

Quick start

const { Database } = require('vitreousdatabase');

async function main() {
  // Opens (or creates) the database file
  const db = await Database.create('./mydata.json');

  // 1. Define the schema for an entity
  await db.entityManager.createEntity('users', {
    type: 'table',
    id: ['id'],
    values: ['id', 'username', 'email'],
    notnullable: ['username'],
    unique: ['email'],
  });

  // 2. Insert a record
  const user = await db.recordManager.insert('users', {
    id: 1,
    username: 'alice',
    email: 'alice@example.com',
  });
  console.log(user); // { id: 1, username: 'alice', email: 'alice@example.com' }

  // 3. Find by id
  const found = await db.recordManager.findByIdSingle('users', 1);
  console.log(found.username); // 'alice'

  // 4. Update
  await db.recordManager.update('users', { id: 1 }, { username: 'alice_b' });

  // 5. Delete
  await db.recordManager.deleteRecord('users', { id: 1 });
}

main().catch(console.error);

The file mydata.json is created automatically if it does not exist.


Concepts

Database file

All data is stored in a single JSON file:

{
  "entitiesConfiguration": { },
  "entities": { }
}
  • entitiesConfiguration — the schema registry: one entry per entity, describing its fields and constraints.
  • entities — the data storage: one array per table entity, each element is a record.

Entity types

  • "table" — A standalone collection of records in the main file. Supports insert, find, update, delete.
  • "object" — A reusable nested structure. Cannot be inserted directly — used only as a field inside another entity.
  • "subdatabase" — A scoped multi-record container stored in a separate sidecar file. Has its own id and values, and can declare child entities (subEntities) whose records live alongside it in the same file. Use for logically grouped records that should scale independently.
  • "sharded" — A partitioned container: records are split across one file per shardKey tuple. Each shard file has its own records plus any child-entity records scoped to that shard. Use when a single entity grows too large for the main file and queries can be partitioned by a key (e.g. users by country).

Entity configuration fields

  • type (required for all): "table", "object", "subdatabase", or "sharded"
  • values (required for all): All field names the entity is allowed to have
  • id (required for table, subdatabase, sharded): Field names that identify a record. Auto-added to notnullable. Uniqueness is enforced as a composite tuple. At least one required.
  • notnullable (optional): Fields that cannot be null or undefined when saving
  • unique (table, subdatabase, sharded): Fields whose value must be unique. On sharded, every unique field must also be in shardKey (cross-shard uniqueness is not enforced).
  • nested (optional): Fields whose value is a nested object (must match a registered "object" entity)
  • shardKey (required for sharded): One or more fields used to partition records across shard files. Must be a subset of id (so id lookups resolve a single shard). shardKey fields are auto-added to notnullable.
  • subEntities (optional on subdatabase/sharded): Map of child entity configs (same shape as a top-level config). v1 children must be table or object. Child table records live inside the same sidecar file as the container.

Note: id (and therefore shardKey) fields are immutable after insert — they cannot be changed via update().

Sidecar files. subdatabase and sharded entities store their data in a sidecar directory next to the main file: <dbfile>.vdb/. A subdatabase becomes one file (<name>.json); a sharded entity becomes a directory (<name>/) containing a manifest.json plus one file per shard tuple. Legacy databases with only table/object entities are unchanged — no sidecar directory is created.


Schema management

createEntity(name, config)

Registers a new entity. Object-type entities must be created before any entity that references them in nested.

// Register the nested type first
await db.entityManager.createEntity('address', {
  type: 'object',
  values: ['street', 'city', 'zip'],
  notnullable: ['city'],
});

// Then register the table that uses it
await db.entityManager.createEntity('customers', {
  type: 'table',
  id: ['id'],
  values: ['id', 'name', 'email', 'address'],
  notnullable: ['name'],
  unique: ['email'],
  nested: ['address'],     // 'address' must already exist as type "object"
});

Subdatabase entity

Registers a container that lives in its own sidecar file and can host child entities:

await db.entityManager.createEntity('app', {
  type: 'subdatabase',
  values: ['name', 'version'],
  id: ['name'],
  subEntities: {
    log: { type: 'table', values: ['id', 'entry'], id: ['id'] },
  },
});

await db.recordManager.insert('app', { name: 'vdb', version: '1.0' });
await db.recordManager.insert('app.log', { id: 1, entry: 'started' });
// Both records live in <dbfile>.vdb/app.json

Sharded entity

Registers a partitioned container. Records are split across one file per shardKey tuple:

await db.entityManager.createEntity('countries', {
  type: 'sharded',
  values: ['code', 'name', 'continent'],
  id: ['code'],
  shardKey: ['code'],
  subEntities: {
    person: {
      type: 'table',
      values: ['personId', 'firstName'],
      id: ['personId'],
      unique: ['firstName'],   // shard-local uniqueness
    },
  },
});

await db.recordManager.insert('countries', { code: 'US', name: 'United States', continent: 'NA' });
await db.recordManager.insert('countries', { code: 'IT', name: 'Italy',         continent: 'EU' });
// Creates <dbfile>.vdb/countries/<shardFile>.json per country, plus manifest.json

Rules for sharded:

  • shardKey is required and must be non-empty.
  • id ⊇ shardKey — every shardKey field must also be in id (so id lookups can resolve a single shard).
  • unique ⊆ shardKey — every field declared unique must also be in shardKey (uniqueness is enforced shard-locally, not globally).
  • shardKey fields cannot be nested (they must be primitive-comparable).
  • shardKey fields are auto-added to notnullable.

getEntity(name)

Returns the configuration object for an entity.

const config = await db.entityManager.getEntity('customers');
console.log(config.values); // ['id', 'name', 'email', 'address']

listEntities(type?)

Returns an array of entity names, optionally filtered by type.

const tables  = await db.entityManager.listEntities('table');
const objects = await db.entityManager.listEntities('object');
const all     = await db.entityManager.listEntities();

deleteEntity(name)

Removes an entity and, for table entities, all its records.

await db.entityManager.deleteEntity('customers');

Deleting an "object" entity that is still referenced by a table throws EntityInUseError.

addField(entityName, fieldName)

Adds a new optional field to an entity's schema. Existing records are not modified — the field is implicitly absent (undefined) in them.

const config = await db.entityManager.addField('users', 'phone');
// config is the updated entity configuration (deep clone)
// Existing records still work — 'phone' is absent but allowed

Returns the updated entity config as a deep clone. Mutating the returned object has no effect on the stored schema.

  • Throws InvalidMigrationError if the field already exists in values.
  • Throws EntityNotFoundError if the entity does not exist.
  • fieldName is not type-checked — passing a non-string value produces undefined behavior.

removeField(entityName, fieldName)

Removes a field from an entity's schema and strips it from all existing records. The field is also removed from notnullable, unique, and nested if present.

await db.entityManager.removeField('users', 'legacyFlag');
// 'legacyFlag' is gone from the schema and deleted from every record

Returns undefined.

  • Throws InvalidIdError if fieldName is one of the entity's id fields (id fields are immutable).
  • Throws InvalidMigrationError if the field is not in values.
  • Throws EntityNotFoundError if the entity does not exist.
  • fieldName is not type-checked — passing a non-string value produces undefined behavior.

addConstraint(entityName, constraint, fields)

Adds a 'notnullable' or 'unique' constraint to one or more fields. Before persisting, a safety check scans existing records and throws if any would violate the new constraint — the schema is never left in an inconsistent state.

// Add notnullable — safe only if no existing record has null for 'email'
await db.entityManager.addConstraint('users', 'notnullable', ['email']);

// Add unique — safe only if all existing 'email' values are already distinct
await db.entityManager.addConstraint('users', 'unique', ['email']);
  • Throws NullConstraintError if adding 'notnullable' and any existing record has null/undefined for the constrained field.
  • Throws UniqueConstraintError if adding 'unique' and any two existing records have the same non-null value for the constrained field. Records where the field is null or undefined are not considered duplicates — addConstraint will succeed even if many records have no value for the field.
  • Throws InvalidMigrationError if the constraint type is unknown, if a field is not in values, or if 'unique' is requested on an "object" entity.
  • Throws EntityNotFoundError if the entity does not exist.
  • Passing an empty fields array ([]) is a no-op — the method writes the schema unchanged and returns undefined without error.

Returns undefined.


CRUD operations

Every CRUD method accepts either a plain entity name ("users") or a dotted path ("countries.person") to target a child of a subdatabase / sharded container.

Sharded-child operations (e.g. countries.person) require a scope option carrying the parent's shardKey values:

await db.recordManager.insert(
  'countries.person',
  { personId: 1, firstName: 'Alice' },
  { scope: { code: 'US' } },   // scope = parent shardKey values → selects the shard file
);

See Subdatabase and sharded containers below for a full walkthrough.

insert(entityName, record, options?)

Inserts a new record. All validation rules are applied.

const order = await db.recordManager.insert('orders', {
  orderId: 101,
  customerId: 1,
  total: 59.90,
});
  • For a top-level sharded entity, the record itself must contain every shardKey field — the shard file is resolved from the record.
  • For a sharded-child path (parent.child), pass { scope: { <parentShardKeyField>: value, ... } } — missing scope throws ShardKeyError.

findById(entityName, idObject, options?)

Looks up a record using an id object. Works for both single and composite ids. Key order does not matter.

// Single id
const customer = await db.recordManager.findById('customers', { id: 1 });

// Composite id
const line = await db.recordManager.findById('orderLines', { orderId: 101, lineId: 3 });

Returns the record, or null if not found. Throws EntityTypeError if called on an "object" entity.

  • idObject must contain all declared id fields — throws InvalidIdError if any are missing or if it contains a non-id key.
  • For a top-level sharded entity, idObject automatically carries shardKey values (since id ⊇ shardKey), so no scope is needed — the correct shard is resolved from the id. If no shard file has been created for that tuple yet, returns null.
  • For a sharded-child path, pass { scope: { <parentShardKeyField>: value } }. Without scope, throws ShardKeyError.

Note: Lookups use strict === comparison: findById('items', { id: '1' }) will not match a record with id: 1 (number); the type of the value passed must match the type stored in the record. Also, because NaN === NaN is false in JavaScript, findById with { id: NaN } will never find a record, even one inserted with id: NaN.
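The strict-equality behaviour can be demonstrated with a minimal stand-in matcher (illustrative only; the library's real lookup is internal):

```javascript
// Minimal stand-in for the strict-equality lookup described above.
const records = [{ id: 1, name: 'alice' }];
const lookup = (recs, idValue) => recs.find(r => r.id === idValue) ?? null;

console.log(lookup(records, 1));    // finds { id: 1, name: 'alice' }
console.log(lookup(records, '1'));  // null: string '1' !== number 1
console.log(lookup(records, NaN));  // null: NaN === NaN is false
```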

O(1) lookups in eager mode. The first findById for a given scope (table, subdatabase, or single shard file) lazily builds an in-memory id index keyed by the composite id tuple. Subsequent lookups in the same scope are O(1). Inserts update the index incrementally; deletes invalidate the scope's index so the next lookup rebuilds it. The index lives only in eager mode and only in memory — it is never persisted.
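A simplified sketch of such a composite-id index (an assumption for illustration — the real index is internal and its key encoding is not documented):

```javascript
// Simplified composite-id index: the key is the JSON-encoded tuple of id
// values in declared id-field order, so lookups after the build are O(1).
function buildIdIndex(records, idFields) {
  const index = new Map();
  for (const record of records) {
    index.set(JSON.stringify(idFields.map(f => record[f])), record);
  }
  return index;
}

const idx = buildIdIndex(
  [{ orderId: 1, lineId: 2, qty: 5 }, { orderId: 1, lineId: 3, qty: 1 }],
  ['orderId', 'lineId'],
);
idx.get(JSON.stringify([1, 2])).qty; // 5, resolved in O(1)
```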

findByIdSingle(entityName, value, options?)

Convenience shorthand for entities with exactly one id field.

const customer = await db.recordManager.findByIdSingle('customers', 1);

Returns null if no record matches. Throws InvalidIdError if the entity has a composite id. Throws EntityTypeError if called on an "object" entity. Accepts options.scope for sharded-child entities (same semantics as findById).

Note: Same strict === comparison as findById: findByIdSingle('users', '1') will not match a record with id: 1 (number).

findAll(entityName, options?)

Returns all records for an entity. Throws EntityTypeError if called on an "object" entity.

const allCustomers = await db.recordManager.findAll('customers');
  • For a sharded entity, findAll fans out across every registered shard file.
  • For a sharded-child path, findAll fans out across every parent shard by default. Pass { scope: { <parentShardKeyField>: value } } to restrict to a single parent shard.

findWhere(entityName, predicate, options?)

Filters records. Accepts a function predicate, a plain object (exact deep match), or a plain object with query operators.

// Function predicate — full power, supports nested access
const rich = await db.recordManager.findWhere('customers', r => r.total > 1000);
const milaneseByFn = await db.recordManager.findWhere('customers', r => r.address?.city === 'Milano');

// Plain object — deep equality, supports nested fields
const alices = await db.recordManager.findWhere('customers', { name: 'Alice' });
const milanese = await db.recordManager.findWhere('customers', { address: { city: 'Milano' } });

// Query operators — composable and programmatic
const expensive = await db.recordManager.findWhere('orders', { total: { $gt: 100 } });
const active    = await db.recordManager.findWhere('orders', { status: { $in: ['new', 'pending'] } });
const range     = await db.recordManager.findWhere('orders', { total: { $gte: 50, $lte: 200 } });
const noAddress = await db.recordManager.findWhere('orders', { address: { $exists: false } });

Comparison operators (field-level):

  • $eq — field === value
  • $ne — field !== value
  • $gt — field > value
  • $gte — field >= value
  • $lt — field < value
  • $lte — field <= value
  • $in — field value is in the array (uses ===)
  • $nin — field value is not in the array
  • $exists — true matches when the field is not undefined; false matches when it is undefined

Multiple operators on the same field are combined with AND:

// total > 50 AND total < 300
await db.recordManager.findWhere('orders', { total: { $gt: 50, $lt: 300 } });
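The field-level semantics above can be sketched with a tiny evaluator (illustrative only; it mirrors the documented semantics, not the library's internal implementation):

```javascript
// Field-level operator semantics as documented above.
const OPS = {
  $eq:  (field, v) => field === v,
  $ne:  (field, v) => field !== v,
  $gt:  (field, v) => field > v,
  $gte: (field, v) => field >= v,
  $lt:  (field, v) => field < v,
  $lte: (field, v) => field <= v,
  $in:  (field, v) => v.includes(field),   // note: includes uses SameValueZero
  $nin: (field, v) => !v.includes(field),
  $exists: (field, v) => (field !== undefined) === v,
};

// Multiple operators on one field AND together, as documented.
const matches = (record, field, opObject) =>
  Object.entries(opObject).every(([op, v]) => OPS[op](record[field], v));

matches({ total: 120 }, 'total', { $gt: 100, $lte: 200 }); // true
```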

Logical operators (top-level keys):

  • $and — Array of sub-predicates; all must match
  • $or — Array of sub-predicates; at least one must match
  • $not — Plain-object predicate that must NOT match

// $and
await db.recordManager.findWhere('orders', {
  $and: [{ total: { $gt: 100 } }, { status: { $ne: 'cancelled' } }],
});

// $or
await db.recordManager.findWhere('orders', {
  $or: [{ status: 'new' }, { status: 'pending' }],
});

// $not
await db.recordManager.findWhere('orders', {
  $not: { status: 'cancelled' },
});

// combining
await db.recordManager.findWhere('orders', {
  $and: [
    { $or: [{ status: 'new' }, { status: 'pending' }] },
    { total: { $gte: 50 } },
  ],
});

Throws EntityTypeError if called on an "object" entity. Throws TypeError for malformed operator objects (unknown operator, $in/$nin operand not an array, $and/$or operand not an array, $not operand not a plain object).

NaN behaviour with operators: comparison operators use === internally. Since NaN !== NaN, { field: { $eq: NaN } } never matches. Use a function predicate with Number.isNaN() to match NaN values explicitly: r => Number.isNaN(r.field).

Automatic shard pruning. For a top-level sharded entity with a plain-object predicate, if every shardKey field is pinned to a direct equality value (not an operator object), findWhere loads only the matching shard file instead of fanning out across all shards. Pruning is skipped if any shardKey field is absent, null/undefined, or wrapped in an operator. Function predicates never prune — use options.scope or an object predicate to get the fast path.

update(entityName, idObject, updates, options?)

Deep-merges updates into the existing record. Returns the updated record.

const updated = await db.recordManager.update('customers', { id: 1 }, {
  email: 'alice_new@example.com',
});

Nested object fields are merged recursively — only the provided keys are overwritten, the rest are preserved:

// Before: { id: 1, address: { street: 'Via Roma 1', city: 'Milano', zip: '20100' } }
await db.recordManager.update('customers', { id: 1 }, {
  address: { city: 'Torino' },
});
// After: { id: 1, address: { street: 'Via Roma 1', city: 'Torino', zip: '20100' } }
  • Throws EntityTypeError if called on an "object" entity.
  • id fields cannot be updated — throws InvalidIdError.
  • idObject must contain all declared id fields — throws InvalidIdError if any are missing or if it contains a non-id key.
  • Throws RecordNotFoundError if no record matches idObject.
  • All validation rules (notnullable, unique, unknown fields) apply to the merged result. Unknown fields in updates are caught by the unknown-field check and throw UnknownFieldError.
  • Array fields are replaced entirely, not merged element-by-element. Only plain objects are deep-merged recursively. [1, 2, 3] updated with [4] becomes [4], not [4, 2, 3].
  • null and undefined values are exempt from uniqueness checks. Multiple records may hold null for a field declared unique; null is treated as "absent" rather than a comparable value.
  • undefined values in updates are dropped silently after the JSON round-trip. If a field in updates is undefined and that field is notnullable, validation will throw NullConstraintError. If it is not notnullable, the field will disappear from the stored record. Use null to explicitly clear a nullable field.
  • For sharded-child paths, pass { scope: { <parentShardKeyField>: value } }. Without scope, throws ShardKeyError. A wrong scope throws RecordNotFoundError (the record lives in a different shard).
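The merge/replace rules above can be sketched as follows (a simplification: it ignores the JSON round-trip that drops undefined values):

```javascript
// Documented merge rules: only plain objects recurse; arrays (and every
// other value) are replaced wholesale.
const isPlainObject = v =>
  v !== null && typeof v === 'object' && !Array.isArray(v);

function deepMerge(target, updates) {
  const out = { ...target };
  for (const [key, value] of Object.entries(updates)) {
    out[key] = isPlainObject(out[key]) && isPlainObject(value)
      ? deepMerge(out[key], value)  // recurse into nested plain objects
      : value;                      // everything else is replaced
  }
  return out;
}

deepMerge({ a: { x: 1 }, tags: [1, 2, 3] }, { a: { y: 2 }, tags: [4] });
// → { a: { x: 1, y: 2 }, tags: [4] }
```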

deleteRecord(entityName, idObject, options?)

Removes a record and returns it.

const removed = await db.recordManager.deleteRecord('customers', { id: 1 });
  • Throws EntityTypeError if called on an "object" entity.
  • idObject must contain all declared id fields — throws InvalidIdError if any are missing or if it contains a non-id key.
  • Throws RecordNotFoundError if no record matches idObject.
  • For sharded-child paths, pass { scope: { <parentShardKeyField>: value } }. Without scope, throws ShardKeyError.

Subdatabase and sharded containers

subdatabase and sharded entities split data out of the main JSON file into a sidecar directory next to it. They address two different scaling problems:

  • A single entity grows large and slows every unrelated read/write → subdatabase: put it in its own file.
  • A single entity grows so large that the file itself becomes a bottleneck, but queries partition cleanly by some key → sharded: split it across one file per shardKey tuple.

Both can declare subEntities — child entities whose records live inside the same sidecar file as the container. Children are addressed with a dotted path ("parent.child") in every RecordManager method.

File layout

Given a main database at ./mydata.json, container data lives in:

mydata.json                       # main file (regular table/object entities)
mydata.json.vdb/                  # sidecar directory, created lazily
  settings.json                   # subdatabase 'settings' (one file, multi-record)
  countries/                      # sharded entity 'countries'
    manifest.json                 # { version, shards: { jsonKey -> filename } }
    code=US.json                  # one shard file per shardKey tuple
    code=IT.json
    ...

Each sidecar container file has the shape:

{
  "records":  [ /* the container's own records */ ],
  "entities": { "<childName>": [ /* subEntity child records */ ] }
}

Shard filenames are <field1>=<enc1>__<field2>=<enc2>.json. If the encoded name is too long or contains unsafe characters, VitreousDataBase falls back to sha1-<hex16>.json and records the mapping in manifest.json.

Subdatabase — single-file container

await db.entityManager.createEntity('settings', {
  type: 'subdatabase',
  values: ['key', 'val'],
  id: ['key'],
});

await db.recordManager.insert('settings', { key: 'theme', val: 'dark' });
await db.recordManager.insert('settings', { key: 'lang',  val: 'en'   });

const theme = await db.recordManager.findById('settings', { key: 'theme' });
const all   = await db.recordManager.findAll('settings');

subdatabase behaves like a regular table except that its records live in <dbfile>.vdb/<name>.json instead of the main file. All validation rules (id, notnullable, unique, nested) apply normally, scoped to that one file.

Sharded — per-shardKey partitioning

await db.entityManager.createEntity('countries', {
  type: 'sharded',
  values: ['code', 'name', 'continent'],
  id: ['code'],
  shardKey: ['code'],
});

// Shard file is resolved from the record itself (because the record carries shardKey)
await db.recordManager.insert('countries', { code: 'US', name: 'United States', continent: 'NA' });
await db.recordManager.insert('countries', { code: 'IT', name: 'Italy',         continent: 'EU' });

// findById: single-shard lookup (id includes shardKey → correct file resolved directly)
const us = await db.recordManager.findById('countries', { code: 'US' });

// findAll: fans out across every registered shard
const everything = await db.recordManager.findAll('countries');

// findWhere: auto-prunes to one shard when the predicate pins every shardKey field
const onlyUS = await db.recordManager.findWhere('countries', { code: 'US' });          // 1 file loaded
const allNA  = await db.recordManager.findWhere('countries', { continent: 'NA' });     // fans out

Child entities inside containers

subdatabase and sharded can host child table entities via subEntities. Each child's records are stored inside the same sidecar file as its parent instance.

await db.entityManager.createEntity('countries', {
  type: 'sharded',
  values: ['code', 'name'],
  id: ['code'],
  shardKey: ['code'],
  subEntities: {
    person: {
      type: 'table',
      values: ['personId', 'firstName'],
      id: ['personId'],
      unique: ['firstName'],
    },
  },
});

await db.recordManager.insert('countries', { code: 'US', name: 'United States' });
await db.recordManager.insert('countries', { code: 'IT', name: 'Italy' });

// Sharded-child inserts REQUIRE options.scope carrying the parent shardKey values
await db.recordManager.insert(
  'countries.person',
  { personId: 1, firstName: 'Alice' },
  { scope: { code: 'US' } },
);
await db.recordManager.insert(
  'countries.person',
  { personId: 1, firstName: 'Alice' },  // same id AND same firstName allowed in a different shard
  { scope: { code: 'IT' } },
);

// Scoped queries touch only one shard file:
const usOnly = await db.recordManager.findAll('countries.person', { scope: { code: 'US' } });

// Unscoped queries fan out across every shard:
const allPeople = await db.recordManager.findAll('countries.person');

Rules for child entities:

  • In v1, subEntities children must be "table" or "object". Nested subdatabase / sharded are not supported yet.
  • A child "table"'s nested refs must point at top-level "object" entities (not sibling children).
  • Uniqueness constraints on a sharded-child are shard-local: the same field value may exist in two different parent shards without conflict. Cross-shard uniqueness is not enforced.
  • subdatabase-child ops never need scope (there is only one sidecar file per subdatabase). sharded-child insert/update/delete/findById require scope; findAll/findWhere fan out when scope is omitted.

Limitations

  • Transactions do not support sidecar ops. db.transaction() operates on the main-file snapshot only. Calling any sub/sharded record operation inside a transaction throws ShardKeyError. Combine a transaction with separate post-transaction sidecar ops if you need to mix them.
  • Watch events fire on the dotted path. Subscribe with watch('countries.person', cb) to receive child-entity events.
  • Cross-shard uniqueness is not enforced. If you need global uniqueness on a sharded entity, include the constraint field in shardKey (so each shard contains at most one matching value).
  • Multi-process access is not safe for sidecar files, exactly like the main file. Use a single process per database.

Transactions

db.transaction(fn) runs multiple operations atomically. All operations share a forked in-memory snapshot. If fn resolves, a single atomic write commits everything. If fn throws, the snapshot is discarded and nothing is persisted. db.transaction() returns the value returned by fn.

await db.transaction(async (tx) => {
  await tx.recordManager.insert('orders',    { orderId: 1, customerId: 42, total: 99 });
  await tx.recordManager.insert('orderLines', { lineId: 1, orderId: 1, productId: 'P01' });
  // if either insert throws (e.g. unique constraint), neither is persisted
});

tx exposes the same entityManager and recordManager APIs as db:

await db.transaction(async (tx) => {
  // schema changes and data changes can be mixed
  await tx.entityManager.addField('products', 'discount');
  await tx.recordManager.update('products', { id: 1 }, { discount: 10 });
});

Reads inside the transaction see the transaction's own uncommitted writes:

await db.transaction(async (tx) => {
  await tx.recordManager.insert('items', { id: 1, qty: 10 });
  const r = await tx.recordManager.findByIdSingle('items', 1);
  console.log(r.qty); // 10 — visible within the transaction
});

Constraints:

  • Watch callbacks do not fire for operations inside a transaction (see Watch API).
  • Nested transactions are not supported — calling db.transaction() inside fn deadlocks.
  • Transactions are serialized through the same mutex as all other operations — a transaction blocks subsequent operations until it commits or rolls back.
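The commit/rollback contract can be sketched as a conceptual model (not the library's actual implementation): fork the state, run fn, commit only on success.

```javascript
// Conceptual model of the transaction contract described above.
async function transaction(state, fn) {
  const snapshot = structuredClone(state); // forked in-memory snapshot
  const result = await fn(snapshot);       // if fn throws, we never reach commit
  for (const key of Object.keys(state)) delete state[key];
  Object.assign(state, snapshot);          // single commit of everything at once
  return result;                           // transaction() returns fn's value
}
```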

Watch API

Subscribe to data changes on a table entity. The callback receives an event object every time a record is inserted, updated, or deleted.

const unsubscribe = db.recordManager.watch('orders', (event) => {
  if (event.type === 'insert') {
    console.log('New order:', event.record);
  } else if (event.type === 'update') {
    console.log('Order changed:', event.previous, '→', event.record);
  } else if (event.type === 'delete') {
    console.log('Order removed:', event.record);
  }
});

await db.recordManager.insert('orders', { orderId: 1, total: 50 });
// → fires: { type: 'insert', record: { orderId: 1, total: 50 } }

await db.recordManager.update('orders', { orderId: 1 }, { total: 75 });
// → fires: { type: 'update', record: { orderId: 1, total: 75 }, previous: { orderId: 1, total: 50 } }

await db.recordManager.deleteRecord('orders', { orderId: 1 });
// → fires: { type: 'delete', record: { orderId: 1, total: 75 } }

// Stop listening
unsubscribe();

Event shapes:

  • 'insert' — record is the inserted record
  • 'update' — record is the record after the update; previous is a snapshot from before the update
  • 'delete' — record is the deleted record

record and previous are deep clones — mutating them has no effect on the database.

Behaviour:

  • watch() is synchronous — it returns the unsubscribe function immediately (not a Promise). It throws TypeError synchronously if callback is not a function.
  • Multiple watchers on the same entity are all called in registration order.
  • A watcher that throws is silently ignored. The write still completes and other watchers still fire.
  • Events fire only after the write succeeds. A failed operation (e.g. unique constraint violation) fires no event. Watchers can safely assume each event represents a committed change.
  • unsubscribe() is idempotent — calling it more than once is a safe no-op.
  • Watch is intra-process only — no event fires when another process modifies the file.
  • Operations inside db.transaction() do not fire watch callbacks.
  • Calling watch() on an "object" entity or a non-existent entity does not throw, but the callback will never fire. Always call unsubscribe() to clean up stale watchers.
  • Watchers registered on an entity survive deleteEntity(). If the entity is later recreated with the same name, old callbacks will fire again. Call unsubscribe() before deleting an entity to avoid this.
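The dispatch rules above (registration order, throwing watchers silently ignored, idempotent unsubscribe) can be sketched as follows — illustrative only, not the library's code:

```javascript
// Minimal watcher dispatch honouring the documented rules.
function createEmitter() {
  const watchers = [];
  return {
    watch(callback) {
      if (typeof callback !== 'function') throw new TypeError('callback must be a function');
      watchers.push(callback);
      let active = true;
      return () => {                      // idempotent unsubscribe
        if (!active) return;
        active = false;
        watchers.splice(watchers.indexOf(callback), 1);
      };
    },
    emit(event) {
      for (const cb of [...watchers]) {   // registration order
        try { cb(event); } catch { /* a throwing watcher is ignored */ }
      }
    },
  };
}
```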

Nested objects

Fields declared in nested must be plain objects. Their structure is validated against the matching "object" entity configuration.

Convention: the field name listed in nested must exactly match the name of the registered "object" entity. For example, a field named "location" must be backed by an entity also called "location".

await db.entityManager.createEntity('location', {
  type: 'object',
  values: ['lat', 'lng'],
  notnullable: ['lat', 'lng'],
});

await db.entityManager.createEntity('stores', {
  type: 'table',
  id: ['storeId'],
  values: ['storeId', 'name', 'location'],
  nested: ['location'],   // field name 'location' → validated against the 'location' object entity
});

await db.recordManager.insert('stores', {
  storeId: 'S01',
  name: 'Central Store',
  location: { lat: 45.46, lng: 9.19 },
});

Nested objects:

  • Are validated for unknown fields and notnullable constraints.
  • Are subject to unique checks using deep equality (key order does not matter).
  • Cannot be used as id fields.
  • Can themselves contain further nested objects (multi-level nesting is supported).
  • Setting a nested field to null is valid for non-notnullable fields and explicitly clears it.

Naming constraint: because the field name must match the "object" entity name, it is not possible to have two fields of the same nested type within the same entity. For example, you cannot have both billingAddress and shippingAddress backed by a single "address" entity — each would require its own separately named "object" entity (e.g. "billingAddress" and "shippingAddress").

Update limitation: update() deep-merges nested objects but cannot remove individual keys from a nested object. Setting a key to null leaves it present as null (which may violate notnullable). To replace a nested object entirely, set the whole field to a new object; to clear it, set the field to null (only valid if the field is not notnullable).


Composite ids

When an entity has more than one id field, use findById with an object.

Composite id uniqueness is enforced as a tuple: only the full combination of id field values must be unique. Different records may share the value of individual id fields as long as the full combination differs.

await db.entityManager.createEntity('orderLines', {
  type: 'table',
  id: ['orderId', 'lineId'],
  values: ['orderId', 'lineId', 'productId', 'qty'],
});

await db.recordManager.insert('orderLines', { orderId: 1, lineId: 1, productId: 'P01', qty: 2 });
await db.recordManager.insert('orderLines', { orderId: 1, lineId: 2, productId: 'P02', qty: 1 });
// orderId: 1 appears in both records — valid because (orderId, lineId) tuples are distinct

const line = await db.recordManager.findById('orderLines', { orderId: 1, lineId: 2 });
// key order does not matter: { lineId: 2, orderId: 1 } works too

Two different comparison semantics apply to id fields. Lookups (findById, update, deleteRecord) use === — so findById with { id: NaN } never finds anything, and -0 matches +0. Uniqueness at insert time uses Object.is() — so two inserts with id: NaN collide, and id: -0 and id: +0 are accepted as distinct. Avoid NaN and -0 as id values to prevent these inconsistencies.
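
Both semantics come straight from JavaScript, so the edge cases are easy to check in isolation:

```javascript
// Lookup semantics (===): NaN never equals itself; -0 equals +0.
NaN === NaN;         // false: findById({ id: NaN }) can never match a record
-0 === 0;            // true:  findById({ id: -0 }) matches a record stored with id 0

// Insert-time uniqueness (Object.is): NaN collides; -0 and +0 are distinct.
Object.is(NaN, NaN); // true:  a second insert with id NaN is rejected as a duplicate
Object.is(-0, 0);    // false: id -0 and id 0 are accepted as two separate records
```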


Eager mode

By default every read operation loads the file from disk and every write saves it immediately. For write-heavy scenarios within a single process, enable eager mode to keep everything in memory and flush manually.

const db = await Database.create('./mydata.json', { eager: true });

// All operations hit the in-memory cache — no disk I/O
await db.recordManager.insert('logs', { id: 1, msg: 'start' });
await db.recordManager.insert('logs', { id: 2, msg: 'end' });

// Persist to disk when ready
await db.flush();

// Or close (flushes automatically)
await db.close();

// Calling close() a second time is a safe no-op — it returns immediately without flushing or throwing.

Warning: Neither mode is safe when multiple processes share the same file. There is no cross-process file locking. The in-process mutex (_enqueue) only serializes operations within a single process. In eager mode data races can cause silent overwrites; in default mode, concurrent read-modify-write cycles between processes can still interleave and lose writes. Use an external coordination mechanism (e.g. a dedicated server process) in multi-process environments.

Eager mode data loss: the emergency sync flush on process.on('exit') is not invoked on SIGKILL (kill -9), OOM termination, or SIGTERM without an explicit handler. Call db.close() or db.flush() before your process exits to guarantee data is written. Alternatively, register your own SIGTERM/SIGINT handlers that call db.flush() before exiting.
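
One way to wire up such handlers (makeGracefulShutdown is illustrative application code; only db.flush() is part of the package API):

```javascript
// Hypothetical shutdown helper: flushes exactly once, even if several
// signals arrive. Only db.flush() is the package's API; everything else
// here is application code.
function makeGracefulShutdown(db) {
  let pending = null;
  return function gracefulShutdown() {
    if (!pending) pending = db.flush();
    return pending;
  };
}

// Usage (in your app entry point):
// const shutdown = makeGracefulShutdown(db);
// process.on('SIGTERM', () => shutdown().then(() => process.exit(0)));
// process.on('SIGINT',  () => shutdown().then(() => process.exit(0)));
```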


Error handling

All errors extend VitreousError. Import specific classes to handle them precisely.

const {
  Database,
  VitreousError,
  EntityNotFoundError,
  UniqueConstraintError,
  NullConstraintError,
  FileAccessError,
} = require('vitreousdatabase');

try {
  await db.recordManager.insert('users', { id: 1, username: null });
} catch (e) {
  if (e instanceof NullConstraintError) {
    console.error(`Null value rejected — field: ${e.fieldName}`);
  } else if (e instanceof UniqueConstraintError) {
    console.error(`Duplicate value rejected — field: ${e.fieldName}, value: ${e.value}`);
  } else if (e instanceof VitreousError) {
    console.error(`Database error: ${e.message}`);
  } else {
    throw e;
  }
}

Full error reference

Each class is listed with the condition that throws it and its extra properties:

  • FileAccessError: file path inaccessible, JSON is corrupt, or operation called after close(). Properties: filePath, reason
  • EntityNotFoundError: entity name not in entitiesConfiguration. Properties: entityName
  • EntityAlreadyExistsError: createEntity called with an existing name. Properties: entityName
  • EntityTypeError: operation requires "table" but got "object" (or vice versa). Properties: entityName, expected, actual
  • EntityInUseError: deleting an "object" entity still referenced by a table. Properties: entityName, referencedBy
  • UnknownFieldError: record contains a field not listed in values. Properties: entityName, fieldName
  • NullConstraintError: a notnullable field is null or undefined. Properties: entityName, fieldName
  • UniqueConstraintError: a unique field value already exists in the data. Properties: entityName, fieldName, value
  • NestedTypeError: a nested field received a non-object value. Properties: entityName, fieldName
  • InvalidIdError: id field is also nested; object entity has id; findByIdSingle on composite id; idObject contains a non-id key or is empty; removeField attempted on an id field. Properties: entityName, reason
  • CircularReferenceError: nested chain forms a cycle (including self-reference). Properties: entityName, cycle
  • RecordNotFoundError: update or deleteRecord found no record matching idObject. Properties: entityName, idObject
  • InvalidMigrationError: addField/removeField/addConstraint called with an invalid argument (field already exists, field not found, unknown constraint type, unique on object entity). Properties: entityName, reason

Complete example

Below is a self-contained script that models a small shop with customers, addresses, and orders.

const { Database, UniqueConstraintError } = require('vitreousdatabase');

async function main() {
  const db = await Database.create('./shop.json');

  // --- Schema ---

  await db.entityManager.createEntity('address', {
    type: 'object',
    values: ['street', 'city', 'zip'],
    notnullable: ['city'],
  });

  await db.entityManager.createEntity('customers', {
    type: 'table',
    id: ['id'],
    values: ['id', 'name', 'email', 'address'],
    notnullable: ['name'],
    unique: ['email'],
    nested: ['address'],
  });

  await db.entityManager.createEntity('orders', {
    type: 'table',
    id: ['orderId'],
    values: ['orderId', 'customerId', 'total', 'status'],
    notnullable: ['customerId', 'total'],
  });

  // --- Insert ---

  await db.recordManager.insert('customers', {
    id: 1,
    name: 'Alice',
    email: 'alice@example.com',
    address: { street: 'Via Roma 1', city: 'Milano', zip: '20100' },
  });

  await db.recordManager.insert('customers', {
    id: 2,
    name: 'Bob',
    email: 'bob@example.com',
  });

  await db.recordManager.insert('orders', { orderId: 101, customerId: 1, total: 49.99, status: 'pending' });
  await db.recordManager.insert('orders', { orderId: 102, customerId: 1, total: 19.00, status: 'shipped' });
  await db.recordManager.insert('orders', { orderId: 103, customerId: 2, total: 99.50, status: 'pending' });

  // --- Query ---

  const alice = await db.recordManager.findByIdSingle('customers', 1);
  console.log(`Customer: ${alice.name} — city: ${alice.address?.city}`);

  const aliceOrders = await db.recordManager.findWhere('orders', { customerId: 1 });
  console.log(`Alice has ${aliceOrders.length} orders`);

  const pendingOrders = await db.recordManager.findWhere('orders', o => o.status === 'pending');
  console.log(`Pending orders: ${pendingOrders.length}`);

  // --- Update ---

  await db.recordManager.update('orders', { orderId: 101 }, { status: 'shipped' });

  // --- Unique constraint ---

  try {
    await db.recordManager.insert('customers', { id: 3, name: 'Eve', email: 'alice@example.com' });
  } catch (e) {
    if (e instanceof UniqueConstraintError) {
      console.log(`Rejected: ${e.message}`);
    }
  }

  // --- Delete ---

  await db.recordManager.deleteRecord('orders', { orderId: 103 });
  console.log(`Orders remaining: ${(await db.recordManager.findAll('orders')).length}`);
}

main().catch(console.error);

Running the tests

node --test test/*.test.js

Or using the npm script:

npm test

The test suite includes:

  • test/validator.test.js — unit tests for Validator.js
  • test/database.test.js — Database init and eager mode
  • test/entity.test.js — EntityManager integration
  • test/record.test.js — RecordManager integration
  • test/migration.test.js — addField, removeField, addConstraint
  • test/transaction.test.js — db.transaction() atomicity and rollback
  • test/watch.test.js — recordManager.watch() events and unsubscribe
  • test/bugs.test.js — regression tests for known bug fixes
  • test/edge_cases.test.js — boundary and edge case coverage
  • test/persistence.test.js — persistence and error property checks
  • test/integration.test.js — end-to-end scenarios
  • test/readme.test.js — verifies README examples work correctly

Known limitations

  • Limited schema migration. addField, removeField, and addConstraint cover common evolution patterns. Renaming a field, changing its type, or changing id composition still requires deleting and recreating the entity (destroying all its records).

  • No referential integrity across table entities. VitreousDataBase has no concept of foreign keys between table entities. Deleting a customers record leaves every orders record with a now-dangling customerId in place, with no warning or error. Cross-table consistency must be maintained by the application.

  • JSON-only values. All field values must be JSON-serializable. Non-finite numbers (NaN, Infinity, -Infinity) are rejected at validation time with a TypeError. BigInt values bypass validation and throw a raw TypeError from JSON.stringify inside insert() — not a VitreousError. Other non-serializable types (Date, RegExp, Map, Set, undefined) are not rejected but are silently corrupted by the JSON.parse(JSON.stringify(...)) round-trip: Date becomes an ISO string, RegExp/Map/Set become {}, and undefined fields are dropped. Use only plain JSON types: strings, numbers, booleans, null, plain objects, and arrays.
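
The silent corruption is easy to reproduce in plain JavaScript:

```javascript
// The JSON.parse(JSON.stringify(...)) round-trip mangles non-JSON types.
const record = {
  when: new Date('2024-01-01T00:00:00Z'),
  pattern: /foo/,
  tags: new Set(['a']),
  note: undefined,
};
const roundTripped = JSON.parse(JSON.stringify(record));

typeof roundTripped.when;  // 'string': the Date became an ISO string
roundTripped.pattern;      // {}: the RegExp is lost
roundTripped.tags;         // {}: the Set is lost
'note' in roundTripped;    // false: the undefined field was dropped
```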

  • No composite unique constraints. The unique field in the entity config applies per-field only. There is no way to declare that a combination of non-id fields must be unique (e.g. categoryId + slug). If you need composite uniqueness, include those fields in id (which enforces composite tuple uniqueness) or enforce the constraint in application code.
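
One application-side approach is to query before inserting (insertIfCompositeUnique below is hypothetical helper code, not part of the package API):

```javascript
// Hypothetical application-level check for composite uniqueness over
// non-id fields. NOT part of the package API. Note that the two calls
// are not atomic, so concurrent inserters in the same process (and any
// other process) can still race past the check.
async function insertIfCompositeUnique(db, entity, fields, record) {
  // findWhere with a plain-object predicate matches all listed fields.
  const where = Object.fromEntries(fields.map(f => [f, record[f]]));
  const clashes = await db.recordManager.findWhere(entity, where);
  if (clashes.length > 0) {
    throw new Error(`duplicate (${fields.join(', ')}) combination`);
  }
  return db.recordManager.insert(entity, record);
}

// Usage sketch:
// await insertIfCompositeUnique(db, 'posts', ['categoryId', 'slug'],
//   { id: 7, categoryId: 1, slug: 'hello' });
```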

  • undefined field values are silently dropped. A field with value undefined that is not in notnullable passes validation but disappears after the JSON round-trip. The returned record will have fewer keys than what was passed. Use null to explicitly store an absent value.

  • -0 is only normalized at the top level, and normalization runs on the full merged record during update(). normalizeMinusZero() converts -0 to 0 for top-level record fields before insert and update. Fields inside nested objects are not normalized — they may transiently hold -0 in memory. JSON serialization always converts -0 to 0, so the value on disk is always 0, but the in-memory representation inside an operation may differ. During update(), normalization is applied to the entire post-merge record, not just the patched fields — a pre-existing top-level -0 that was not part of the patch will also be converted to 0.

  • findWhere predicate errors are not wrapped. If the predicate function throws (e.g. accessing a property of null), the raw JavaScript error propagates uncaught — it is not wrapped in a VitreousError. Code that catches only VitreousError will not handle it.

  • Entity name format is not validated. createEntity requires a non-empty string for the name — passing '', null, or a non-string throws TypeError. However, beyond this basic check, the format is unconstrained: names containing spaces are accepted silently. The name __proto__ is handled safely (no prototype pollution), but other prototype property names (constructor, hasOwnProperty, toString, etc.) may produce undefined behavior and are not recommended.

  • Full file load on every operation (non-eager mode). In the default mode, each operation calls fs.readFile + JSON.parse on the entire database file. There is no pagination or streaming. For large datasets this becomes an O(n) memory allocation per operation. Use eager mode for read-heavy workloads on large files.

  • Circular reference DFS is exponential on deep diamond dependencies. detectCircularReference creates a fresh visited-set copy per branch, allowing shared nodes to be revisited once per path. For schemas with many levels of diamond-shaped nested dependencies (A→B, A→C, B→D, C→D, …), the work grows as O(2^n). In practice, nested schemas are shallow, so this is not a concern for typical usage.

  • update() with an empty {} patch triggers a write and fires a watch event. deepMerge(existingRecord, {}) produces an identical copy; validation passes; the file is written; watchers are notified. Guard with if (Object.keys(updates).length === 0) return if you want to skip no-op patches.
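
A thin wrapper can skip such no-ops (updateIfChanged below is hypothetical application code):

```javascript
// Hypothetical wrapper that skips empty patches, avoiding the redundant
// disk write and spurious watch notification described above.
async function updateIfChanged(db, entity, idObject, updates) {
  if (Object.keys(updates).length === 0) return null; // no-op patch
  return db.recordManager.update(entity, idObject, updates);
}
```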

  • $and: [] matches every record; $or: [] matches no record. Empty-array behaviour follows JavaScript's Array.prototype.every / some: [].every(...) is true, [].some(...) is false. Avoid empty arrays unless you intend these semantics.
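
The underlying JavaScript behaviour, in isolation:

```javascript
// Empty-array operator semantics follow Array.prototype directly.
[].every(() => false); // true:  { $and: [] } matches every record
[].some(() => true);   // false: { $or: [] } matches no record
```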

  • $not: {} never matches any record. deepMatch(record, {}) is always true, so { $not: {} } negates to false for every record.

  • $exists: false is unreliable after JSON round-trip. undefined field values are dropped by JSON.stringify, so after a round-trip a field that was set to undefined is indistinguishable from a field that was never written. $exists: true behaves reliably; avoid basing runtime logic on $exists: false.

  • insert() with a non-object record throws a raw TypeError or produces a corrupted record. Passing null or undefined throws TypeError: Cannot convert undefined or null to object. Passing a string produces a record keyed by character indices (e.g. { '0': 'h', '1': 'i' }), bypassing validation and corrupting the entity. Always pass a plain object.

  • flush() on a closed database is a silent no-op, not an error. After db.close(), calling flush() exits early without throwing — inconsistent with _read() and _write(), which both throw FileAccessError('database is closed'). Always use db.close() (which flushes internally) rather than calling flush() independently.

  • createEntity does not validate that values elements are strings. The array is verified as non-empty and duplicate-free, but individual elements are not type-checked. Passing values: [1, null, {}] is accepted silently; subsequent operations may misbehave because field-name comparisons use string equality.

  • $exists operator accepts any truthy/falsy value, not only booleans. { $exists: 1 } behaves as $exists: true; { $exists: 0 } behaves as $exists: false. No error is thrown for non-boolean operands.

  • Nested entity lookup in createEntity step 9 does not use hasOwnProperty. If a field in nested shares a name with an Object.prototype property (e.g. 'constructor'), the lookup finds the prototype method rather than undefined, producing EntityTypeError instead of EntityNotFoundError. Avoid using Object.prototype property names as nested field names.

  • Unknown query operators throw TypeError, not VitreousError. Passing an unrecognised operator key (e.g. { qty: { $between: [1, 5] } }) throws new TypeError('Unknown query operator: ...'). Code catching only VitreousError subclasses will not intercept this error.

  • removeField silently removes the field from notnullable, unique, and nested. If the removed field was also listed as a nested reference to an object entity, that reference is dropped without any warning or error.

  • addConstraint accepts duplicate entries in fields without error. Passing ['x', 'x'] runs validation twice for 'x' but adds it to the constraint array only once.

  • Opening a malformed JSON file does not throw immediately. Database.create() succeeds if the file is valid JSON, even if it lacks entitiesConfiguration or entities keys. Errors surface on the first actual operation.

  • watch() on a non-existent entity registers silently and never fires. watch() does not validate that the entity exists. The callback is stored internally but will never be invoked. Call unsubscribe() to clean it up.

  • watch() callbacks survive entity deletion. Calling deleteEntity('foo') does not remove watchers registered for 'foo'. If the entity is later recreated under the same name, those old callbacks will fire again. Always call the unsubscribe function before deleting an entity to avoid stale callbacks.
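
One way to keep the unsubscribe bookkeeping in order (createWatchRegistry below is hypothetical application code; only watch()'s returned unsubscribe function and deleteEntity are the package's API):

```javascript
// Hypothetical bookkeeping for watch() unsubscribe functions so they can
// be torn down before deleteEntity(). Names here are illustrative.
function createWatchRegistry() {
  const subs = new Map(); // entityName -> array of unsubscribe functions
  return {
    // Record an unsubscribe function returned by recordManager.watch().
    track(entityName, unsubscribe) {
      const list = subs.get(entityName) ?? [];
      list.push(unsubscribe);
      subs.set(entityName, list);
    },
    // Unsubscribe every tracked watcher, then delete the entity.
    async deleteEntitySafely(db, entityName) {
      for (const unsub of subs.get(entityName) ?? []) unsub();
      subs.delete(entityName);
      await db.entityManager.deleteEntity(entityName);
    },
  };
}
```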

  • $and, $or, and $not cannot be used as field-level operators. They are processed only at the root of a deepMatch call. Using { field: { $and: [...] } } passes them to applyOperators, which throws TypeError('Unknown query operator: $and').

  • Mixing operator keys and plain field keys in a predicate throws TypeError. A predicate like { $gt: 5, name: 'foo' } that contains both $... keys and plain field keys throws TypeError('Query predicate cannot mix operator keys ($...) with plain field keys').

  • findWhere with an array as the predicate throws TypeError. Passing an array instead of a function or plain object throws TypeError('predicate must be a function or a plain object').

  • $not only accepts a plain object as its operand. Passing a function or an operator object (e.g. { $not: { $gt: 5 } }) throws TypeError('$not operand must be a plain object'). To negate a comparison, use $ne, $nin, or a function predicate.

  • addConstraint does not validate that fields is an array. Passing a string instead of an array (e.g. addConstraint('users', 'notnullable', 'name')) iterates over the string's individual characters, each of which will likely throw InvalidMigrationError for not being a known field name.

  • ID lookup uses ===, not Object.is(). findById, findByIdSingle, update, and deleteRecord match records using ===. This means -0 and +0 are treated as the same id in lookups, even though insert uses Object.is() for uniqueness (treating them as distinct). Avoid using -0 and +0 as id values.

  • createEntity silently deduplicates id, notnullable, unique, and nested arrays. Duplicate entries in values throw TypeError; duplicates in the other arrays are removed without warning.

  • findByIdSingle throws InvalidIdError for entities with a composite id. If an entity has more than one id field, findByIdSingle throws InvalidIdError. Use findById with a full idObject for composite-id entities.

  • addField cannot add a field to the nested list. There is no API to retroactively mark an existing field as a nested reference. To add a nested field, the entity must be deleted and recreated.

  • watch() on a closed database registers silently without error. watch() is synchronous and bypasses _enqueue, so it does not check whether the database has been closed. After db.close(), calling watch() succeeds — the callback is stored internally — but will never fire, because all subsequent writes fail with FileAccessError. No error is thrown at registration time.

  • addConstraint('unique') on an id field creates redundant and conflicting checks. There is no guard preventing addConstraint(entity, 'unique', ['idField']) on a field already declared in id. The id declaration enforces composite-tuple uniqueness; adding unique on an individual id field introduces a stricter per-field check. For composite-id entities, two records may legitimately share one id field value (the full tuple is still unique), but the per-field unique constraint will reject the second insert even though the composite id is valid.

  • update() with a non-object updates argument throws a raw TypeError. No type-check is performed on updates before it is passed to deepMerge. Passing null, a number, or a boolean causes Object.keys(updates) to throw a native TypeError that is not wrapped in a VitreousError. Passing a string deep-merges on character indices, silently producing a corrupted record. Always pass a plain object as updates.