RESPLite

A RESP server backed by SQLite. Compatible with redis clients and redis-cli, persistent by default, zero external daemons, and minimal memory footprint.

Overview

RESPLite speaks RESP (the Redis Serialization Protocol), so your existing redis npm client and redis-cli work without changes. The storage layer is SQLite: WAL mode, FTS5 for full-text search, and a single .db file that survives restarts without snapshots or AOF.

It is not a Redis clone. It covers a practical subset of commands that map naturally to SQLite, suited for single-node workloads where Redis' in-memory latency is not a hard requirement.

  • Zero external services — just Node.js and a .db file.
  • Drop-in compatible — works with the official redis npm client and redis-cli.
  • Persistent by default — no snapshots, no AOF, no config.
  • Embeddable — start the server and connect from the same script.
  • Full-text search — FT.* commands via SQLite FTS5.
  • Simple queues — lists with BLPOP/BRPOP.
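
As a concrete illustration of the wire-compatibility claim: RESP is a simple length-prefixed text protocol, which is why existing clients work unchanged. A minimal encoder sketch (the redis client does this for you; encodeRESPCommand is not part of the package):

```javascript
// Minimal sketch of RESP framing. A command is sent as an array of bulk
// strings: "*<count>\r\n" followed by "$<byte-length>\r\n<data>\r\n" per argument.
function encodeRESPCommand(args) {
  return `*${args.length}\r\n` +
    args.map((a) => `$${Buffer.byteLength(a)}\r\n${a}\r\n`).join('');
}

console.log(JSON.stringify(encodeRESPCommand(['SET', 'foo', 'bar'])));
// → "*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n"
```

The server replies with the same framing (e.g. `+OK\r\n` for a simple string), so any RESP-speaking client can talk to it.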

When RESPLite beats Redis in Docker

Building this project surfaced a clear finding: Redis running inside Docker on the same host often has worse latency than RESPLite running locally. Docker's virtual network adds overhead that disappears when the server runs in the same process/host. For single-node workloads this makes RESPLite the faster, simpler option.

The strongest use case is migrating a non-replicated Redis instance that has grown large (tens of GB). You don't need to manage replicas, AOF, or RDB. Once migrated, you get a single SQLite file and latency that is good enough for most workloads. The built-in migration tooling (see Migration from Redis) handles datasets of that size with minimal downtime.

Benchmark (Redis vs RESPLite)

A typical comparison is Redis (e.g. in Docker) on one side and RESPLite locally on the other. In that setup, RESPLite often shows better latency because it avoids Docker networking and runs in the same process/host. The benchmark below uses RESPLite with the default PRAGMA template only.

Example results (Redis vs RESPLite, default pragma, 10k iterations):

Suite Redis (Docker) RESPLite (default)
PING 8.79K/s 37.36K/s
SET+GET 4.68K/s 11.96K/s
MSET+MGET(10) 4.41K/s 5.81K/s
INCR 9.54K/s 18.97K/s
HSET+HGET 4.40K/s 11.91K/s
HGETALL(50) 8.39K/s 11.01K/s
HLEN(50) 9.36K/s 31.21K/s
SADD+SMEMBERS 9.27K/s 17.37K/s
LPUSH+LRANGE 8.34K/s 14.27K/s
LREM 4.37K/s 6.08K/s
ZADD+ZRANGE 7.80K/s 17.12K/s
SET+DEL 4.39K/s 9.57K/s
FT.SEARCH 8.36K/s 8.22K/s

Run npm run benchmark -- --template default to reproduce. Numbers depend on host and whether Redis is native or in Docker.

How to run:

# Terminal 1: Redis on 6379 (e.g. docker run -p 6379:6379 redis). Terminal 2: RESPLite on 6380
RESPLITE_PORT=6380 npm start

# Terminal 3: run benchmark (Redis=6379, RESPLite=6380 by default)
npm run benchmark

# Only RESPLite with default pragma
npm run benchmark -- --template default

# Custom iterations and ports
npm run benchmark -- --iterations 10000 --redis-port 6379 --resplite-port 6380

Install

npm install resplite

Quick start (standalone server)

npm start

By default the server listens on port 6379 and stores data in data.db in the current directory.

redis-cli -p 6379
> PING
PONG
> SET foo bar
OK
> GET foo
"bar"

Standalone server script (fixed port)

Run this as a persistent background process (node server.js). RESPLite listens on port 6380 and stays up until the process receives SIGINT (Ctrl+C) or SIGTERM, at which point it closes the server and exits cleanly. With the default configuration the server runs inside the same process, so if the process is killed (e.g. SIGKILL or a force quit), the TCP server and all client connections are torn down with it.

// server.js
import { createRESPlite } from 'resplite/embed';

const srv = await createRESPlite({ port: 6380, db: './data.db' });
console.log(`RESPLite listening on ${srv.host}:${srv.port}`);

Then connect from any other script or process:

redis-cli -p 6380 PING

Environment variables

Variable Default Description
RESPLITE_PORT 6379 Server port
RESPLITE_DB ./data.db SQLite database file
RESPLITE_PRAGMA_TEMPLATE default SQLite PRAGMA preset (see below)
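
For illustration, the variables above resolve into startup options roughly like this (a sketch; resolveConfig is hypothetical, not the package's actual startup code):

```javascript
// Hypothetical sketch: map the documented env vars to server options,
// falling back to the defaults from the table above.
function resolveConfig(env = process.env) {
  return {
    port: Number(env.RESPLITE_PORT ?? 6379),
    db: env.RESPLITE_DB ?? './data.db',
    pragmaTemplate: env.RESPLITE_PRAGMA_TEMPLATE ?? 'default',
  };
}

console.log(resolveConfig({ RESPLITE_PORT: '6380' }));
// → { port: 6380, db: './data.db', pragmaTemplate: 'default' }
```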

PRAGMA templates

Template Description Key settings
default Balanced durability and speed (recommended) WAL, synchronous=NORMAL, 20 MB cache
performance Maximum throughput, reduced crash safety WAL, synchronous=OFF, 64 MB cache, 512 MB mmap, exclusive locking
safety Crash-safe writes at the cost of speed WAL, synchronous=FULL, 20 MB cache
minimal Only WAL + foreign keys WAL, foreign_keys=ON
none No pragmas applied — pure SQLite defaults
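
The templates roughly expand to PRAGMA statements like the following sketch (settings taken from the table above; the package's actual SQL may differ, and the exact values are assumptions):

```javascript
// Illustrative expansion of the documented templates. Note SQLite's
// convention: a negative cache_size is a size in KiB, so -20000 ≈ 20 MB.
const PRAGMA_TEMPLATES = {
  default:     ['journal_mode = WAL', 'synchronous = NORMAL', 'cache_size = -20000'],
  performance: ['journal_mode = WAL', 'synchronous = OFF', 'cache_size = -64000',
                'mmap_size = 536870912', 'locking_mode = EXCLUSIVE'],
  safety:      ['journal_mode = WAL', 'synchronous = FULL', 'cache_size = -20000'],
  minimal:     ['journal_mode = WAL', 'foreign_keys = ON'],
  none:        [],
};

function pragmaStatements(template) {
  return (PRAGMA_TEMPLATES[template] ?? []).map((p) => `PRAGMA ${p};`);
}
```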

Programmatic usage (embedded)

RESPLite can be started and consumed entirely within a single Node.js script — no separate process needed. This is exactly how the test suite works.

Minimal example

import { createClient } from 'redis';
import { createRESPlite } from 'resplite/embed';

const srv = await createRESPlite({ db: './my-app.db' });
const client = createClient({ socket: { port: srv.port, host: '127.0.0.1' } });
await client.connect();

await client.set('hello', 'world');
console.log(await client.get('hello'));  // → "world"

await client.quit();
await srv.close();

Observability (event hooks)

When embedding RESPLite you can pass optional hooks to log unknown commands, command errors, or socket errors (e.g. for warn/error in your logger). The client still receives the same RESP responses; hooks are for observability only.

import pino from 'pino';
const log = pino(); // or your logger

const srv = await createRESPlite({
  port: 6380,
  db: './data.db',
  hooks: {
    onUnknownCommand({ command, argsCount, clientAddress }) {
      log.warn({ command, argsCount, clientAddress }, 'RESPLite: unsupported command');
    },
    onCommandError({ command, error, clientAddress }) {
      log.warn({ command, error, clientAddress }, 'RESPLite: command error');
    },
    onSocketError({ error, clientAddress }) {
      log.error({ err: error, clientAddress }, 'RESPLite: connection error');
    },
  },
});

Hook When it is called
onUnknownCommand Client sent a command not implemented by RESPLite (e.g. SUBSCRIBE, PUBLISH).
onCommandError A command failed (wrong type, invalid args, or handler threw).
onSocketError The connection socket emitted an error (e.g. ECONNRESET).

Migration from Redis

RESPLite is a good fit for migrating non-replicated Redis instances that have grown large (e.g. tens of GB) and where RESPLite's latency is acceptable. The flow (dirty-key tracker, bulk import, cutover) is designed for that scenario with minimal downtime.

Migration supports two modes: a simple one-shot import for small datasets where downtime is acceptable, and a minimal-downtime flow based on a dirty-key tracker for large ones. Both are described below.

Programmatic migration API (JavaScript)

As an alternative to the CLI, the full migration flow is available as a JavaScript API via resplite/migration. Useful for embedding the migration inside your own scripts or automation pipelines.

import { createMigration } from 'resplite/migration';

const m = createMigration({
  from:  'redis://127.0.0.1:6379',  // source Redis URL (default)
  to:    './resplite.db',           // destination SQLite DB path (required)
  runId: 'my-migration-1',          // unique run ID (required for bulk/status/applyDirty)

  // optional — same defaults as the CLI:
  scanCount:      1000,
  batchKeys:      200,
  batchBytes:     64 * 1024 * 1024,  // 64 MB
  maxRps:         0,                  // 0 = unlimited
  pragmaTemplate: 'default',

  // If your Redis deployment renamed CONFIG for security:
  // configCommand: 'MYCONFIG',
});

// Step 0 — Preflight: inspect Redis before starting
const info = await m.preflight();
console.log('keys (estimate):', info.keyCountEstimate);
console.log('type distribution:', info.typeDistribution);
console.log('notify-keyspace-events:', info.notifyKeyspaceEvents);
console.log('CONFIG available:', info.configCommandAvailable);  // false if renamed
console.log('recommended params:', info.recommended);

// Step 0b — Enable keyspace notifications (required for dirty-key tracking)
// Reads the current value and merges the new flags — existing flags are preserved.
const ks = await m.enableKeyspaceNotifications();
// → { ok: true, previous: '', applied: 'KEA' }
// If CONFIG is renamed and configCommand was not set, ok=false and error explains how to fix it.

// Step 0c — Start dirty tracking (in-process, same script)
await m.startDirtyTracker({
  onProgress: (p) => {
    // one callback per keyspace event tracked during bulk/cutover
    console.log(`[dirty ${p.totalEvents}] event=${p.event} key=${p.key}`);
  },
});

// Step 1 — Bulk import (checkpointed, resumable). Same script to start or continue.
// Use keyCountEstimate from preflight to show progress % (estimate; actual count may change).
const total = info.keyCountEstimate || 1;
await m.bulk({
  resume: true, 
  onProgress: (r) => {
    const pct = total ? ((r.scanned_keys / total) * 100).toFixed(1) : '—';
    console.log(
      `scanned=${r.scanned_keys} migrated=${r.migrated_keys} errors=${r.error_keys} progress=${pct}%`
    );
  },
});

// Check status at any point (synchronous, no Redis needed)
const { run, dirty } = m.status();
console.log('bulk status:', run.status, '— dirty counts:', dirty);

// Step 2 — Apply dirty keys that changed in Redis during bulk
await m.applyDirty();

// Step 2b — Stop tracker after cutover
await m.stopDirtyTracker();

// Step 3 — Verify a sample of keys match between Redis and the destination
const result = await m.verify({ samplePct: 0.5, maxSample: 10000 });
console.log(`verified ${result.sampled} keys — mismatches: ${result.mismatches.length}`);

// Disconnect Redis when done
await m.close();

Bulk: Automatic resume (default)
resume defaults to true, so the same script works for both starting and continuing. The first run starts from cursor 0; if the process is interrupted (Ctrl+C, crash, etc.), running the script again continues from the last checkpoint. You do not need to pass resume: false on the first run or change anything to resume.

Graceful shutdown
On SIGINT (Ctrl+C) or SIGTERM, the bulk importer checkpoints progress, sets the run status to aborted, closes the SQLite database cleanly (so WAL is checkpointed and the file is not left open), then exits. You can safely interrupt a long-running bulk and resume later.

The JS API can run the dirty-key tracker in-process via m.startDirtyTracker() / m.stopDirtyTracker(), so the full flow can run from a single script. You can still use npx resplite-dirty-tracker start|stop if you prefer a separate process.

Renamed CONFIG command

If your Redis instance has the CONFIG command renamed (a common hardening practice), pass the new name to createMigration:

const m = createMigration({
  from: 'redis://10.0.0.10:6379',
  to:   './resplite.db',
  runId: 'run_001',
  configCommand: 'MYCONFIG',  // the renamed command
});

// preflight will use MYCONFIG GET notify-keyspace-events
const info = await m.preflight();
// info.configCommandAvailable → false if the name is wrong

// enableKeyspaceNotifications will use MYCONFIG SET notify-keyspace-events KEA
const result = await m.enableKeyspaceNotifications({ value: 'KEA' });

The same flag is available in the CLI:

npx resplite-dirty-tracker start --run-id run_001 --to ./resplite.db \
  --from redis://10.0.0.10:6379 --config-command MYCONFIG

Simple one-shot import

For small datasets or when downtime is acceptable:

# Default: redis://127.0.0.1:6379 → ./data.db
npm run import-from-redis -- --db ./migrated.db

# Custom Redis URL
npm run import-from-redis -- --db ./migrated.db --redis-url redis://127.0.0.1:6379

# Or host/port
npm run import-from-redis -- --db ./migrated.db --host 127.0.0.1 --port 6379

# Optional: PRAGMA template for the target DB
npm run import-from-redis -- --db ./migrated.db --pragma-template performance

Redis with authentication

Migration supports Redis instances protected by a password. Use a Redis URL that includes the password (or username and password for Redis 6+ ACL):

  • Password only: redis://:PASSWORD@host:port
  • Username and password: redis://username:PASSWORD@host:port

Examples:

# One-shot import from authenticated Redis
npm run import-from-redis -- --db ./migrated.db --redis-url "redis://:mysecret@127.0.0.1:6379"

# Minimal-downtime flow: use --from with the full URL (or set RESPLITE_IMPORT_FROM)
npx resplite-import preflight --from "redis://:mysecret@10.0.0.10:6379" --to ./resplite.db
npx resplite-dirty-tracker start --run-id run_001 --from "redis://:mysecret@10.0.0.10:6379" --to ./resplite.db

For one-shot import, authentication is only available when using --redis-url; the --host / --port options do not support a password.
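
Passwords containing special characters must be URL-encoded inside the Redis URL. A sketch of building such a URL safely (redisUrl is illustrative, not part of the package):

```javascript
// Illustrative helper: the WHATWG URL class percent-encodes the userinfo
// component, so passwords with spaces, '@', etc. come out valid.
function redisUrl({ host, port = 6379, username = '', password = '' }) {
  const u = new URL(`redis://${host}:${port}`);
  if (username) u.username = username;
  if (password) u.password = password;
  return u.toString();
}

console.log(redisUrl({ host: '127.0.0.1', password: 'my secret' }));
// → redis://:my%20secret@127.0.0.1:6379
```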

Search indices (FT.*)
The KV bulk migration imports only the Redis keyspace (strings, hashes, sets, lists, zsets). RediSearch index schemas and documents are migrated separately with the migrate-search step — see Migrating RediSearch indices below.

Minimal-downtime migration

For large datasets (~30 GB), use the Dirty Key Registry flow so the bulk of the migration runs online and only a short cutover is needed.

Enable keyspace notifications in Redis (required for the dirty-key tracker). Either run at runtime:

redis-cli CONFIG SET notify-keyspace-events KEA

Or add to redis.conf and restart Redis:

notify-keyspace-events KEA

(K = keyspace prefix, E = keyevent prefix, A = all event types — lets the tracker see every key change and expiration.)
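
The merge behavior that enableKeyspaceNotifications performs in the programmatic API (existing flags preserved, new ones added) amounts to a character union; an illustrative sketch:

```javascript
// Illustrative sketch of the flag merge: union of the characters already
// configured and the characters being requested (actual code may differ).
function mergeNotifyFlags(current, wanted) {
  return [...new Set([...current, ...wanted])].join('');
}

console.log(mergeNotifyFlags('xE', 'KEA'));  // → xEKA
```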

Renamed CONFIG command? Some Redis deployments rename CONFIG for security. Pass --config-command <name> to the CLI tools, or the configCommand option to the JS API — see below.

  1. Preflight – Check Redis, key count, type distribution, and that keyspace notifications are enabled:

    npx resplite-import preflight --from redis://10.0.0.10:6379 --to ./resplite.db
  2. Start dirty-key tracker – Captures keys modified during bulk (requires notify-keyspace-events in Redis):

    npx resplite-dirty-tracker start --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db
    # If CONFIG was renamed:
    npx resplite-dirty-tracker start --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db --config-command MYCONFIG
  3. Bulk import – SCAN and copy all keys; progress is checkpointed and resumable (resume is default; re-run the same command to continue after a stop):

    npx resplite-import bulk --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db \
      --scan-count 1000 --max-rps 2000 --batch-keys 200 --batch-bytes 64MB
  4. Monitor – Check run and dirty-key counts:

    npx resplite-import status --run-id run_001 --to ./resplite.db
  5. Cutover – Freeze app writes to Redis, then apply remaining dirty keys:

    npx resplite-import apply-dirty --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db
  6. Stop tracker and switch – Stop the tracker and point clients to RESPLite:

    npx resplite-dirty-tracker stop --run-id run_001 --to ./resplite.db
  7. Verify – Optional sampling check between Redis and destination:

    npx resplite-import verify --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db --sample 0.5%

Then start RESPLite with the migrated DB: RESPLITE_DB=./resplite.db npm start.
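
Step 3's --batch-bytes flag accepts human-readable sizes such as 64MB. An illustrative parser sketch (the CLI's actual grammar may differ):

```javascript
// Hypothetical sketch: parse "64MB"-style sizes into a byte count,
// using binary multiples (64MB → 64 * 1024 * 1024 = 67108864).
function parseBytes(s) {
  const m = /^(\d+)\s*(KB|MB|GB)?$/i.exec(String(s).trim());
  if (!m) throw new Error(`invalid size: ${s}`);
  const mult = { KB: 2 ** 10, MB: 2 ** 20, GB: 2 ** 30 }[m[2]?.toUpperCase()] ?? 1;
  return Number(m[1]) * mult;
}

console.log(parseBytes('64MB'));  // → 67108864
```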

Low-level re-exports

If you need more control, the individual functions and registry helpers are also exported:

import {
  runPreflight, runBulkImport, runApplyDirty, runVerify,
  getRun, getDirtyCounts, createRun, setRunStatus, logError,
} from 'resplite/migration';

Strings, TTL, and key operations

// SET with expiration
await client.set('session:abc', JSON.stringify({ user: 'alice' }));
await client.expire('session:abc', 3600);      // expire in 1 hour
console.log(await client.ttl('session:abc'));  // → 3600 (approx)

// Atomic counters
await client.set('visits', '0');
await client.incr('visits');
await client.incrBy('visits', 10);
console.log(await client.get('visits'));       // → "11"

// Multi-key operations
await client.mSet(['k1', 'v1', 'k2', 'v2']);
const values = await client.mGet(['k1', 'k2', 'missing']);
console.log(values);  // → ["v1", "v2", null]

// Key existence and deletion
console.log(await client.exists('k1'));        // → 1
await client.del('k1');
console.log(await client.exists('k1'));        // → 0

Hashes

await client.hSet('user:1', { name: 'Martin', age: '42', city: 'BCN' });

console.log(await client.hGet('user:1', 'name'));     // → "Martin"

const user = await client.hGetAll('user:1');
console.log(user);  // → { name: "Martin", age: "42", city: "BCN" }

await client.hIncrBy('user:1', 'age', 1);
console.log(await client.hGet('user:1', 'age'));      // → "43"

console.log(await client.hExists('user:1', 'email')); // → false

Sets

await client.sAdd('tags', ['node', 'sqlite', 'redis']);
console.log(await client.sMembers('tags'));           // → ["node", "sqlite", "redis"]
console.log(await client.sIsMember('tags', 'node'));  // → true
console.log(await client.sCard('tags'));              // → 3

await client.sRem('tags', 'redis');
console.log(await client.sCard('tags'));              // → 2

Lists

await client.lPush('queue', ['c', 'b', 'a']);      // push left: a, b, c
await client.rPush('queue', ['d', 'e']);           // push right: d, e

console.log(await client.lLen('queue'));           // → 5
console.log(await client.lRange('queue', 0, -1));  // → ["a", "b", "c", "d", "e"]
console.log(await client.lIndex('queue', 0));      // → "a"

console.log(await client.lPop('queue'));           // → "a"
console.log(await client.rPop('queue'));           // → "e"

Blocking list commands (BLPOP / BRPOP)

BLPOP and BRPOP block until an element is available or a timeout (seconds) is reached. Use them for simple queues or coordination between producers and consumers.

// Consumer: block up to 10 seconds for an element from "tasks" or "fallback"
const result = await client.blPop(['tasks', 'fallback'], 10);
// result is { key: 'tasks', element: 'item1' } or null on timeout

// Producer (e.g. another client or process)
await client.rPush('tasks', 'item1');

  • Timeout: 0 = block indefinitely; > 0 = block up to that many seconds.
  • Return: { key, element } on success, or null on timeout.
  • Multi-key: Keys are checked in order; the first key that has an element wins. One push wakes at most one blocked client (FIFO per key).
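
The consumer side generalizes to a small worker step; a sketch assuming a connected node-redis client (drainOnce is illustrative, not part of the package):

```javascript
// One worker step on top of BLPOP: returns true if an element was
// processed, false on timeout (caller loops as long as it wants).
async function drainOnce(client, queues, handle, timeoutSec = 5) {
  const res = await client.blPop(queues, timeoutSec);  // null on timeout
  if (res === null) return false;
  await handle(res.key, res.element);
  return true;
}

// Typical loop: while (await drainOnce(client, ['tasks'], processTask)) {}
```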

Sorted sets

await client.zAdd('leaderboard', [
  { score: 100, value: 'alice' },
  { score: 250, value: 'bob' },
  { score: 175, value: 'carol' },
]);

console.log(await client.zCard('leaderboard'));                // → 3
console.log(await client.zScore('leaderboard', 'bob'));        // → 250
console.log(await client.zRange('leaderboard', 0, -1));        // → ["alice", "carol", "bob"]
console.log(await client.zRangeByScore('leaderboard', 100, 200)); // → ["alice", "carol"]

Full-text search (RediSearch-like)

// Create an index
await client.sendCommand(['FT.CREATE', 'articles', 'SCHEMA', 'payload', 'TEXT']);

// Add documents
await client.sendCommand([
  'FT.ADD', 'articles', 'doc:1', '1', 'REPLACE', 'FIELDS',
  'payload', 'Introduction to SQLite full-text search'
]);
await client.sendCommand([
  'FT.ADD', 'articles', 'doc:2', '1', 'REPLACE', 'FIELDS',
  'payload', 'Building a Redis-compatible server in Node.js'
]);

// Search
const results = await client.sendCommand([
  'FT.SEARCH', 'articles', 'SQLite', 'NOCONTENT', 'LIMIT', '0', '10'
]);
console.log(results);  // → [1, "doc:1"]  (count + matching doc IDs)

// Autocomplete suggestions
await client.sendCommand(['FT.SUGADD', 'articles', 'sqlite full-text', '10']);
await client.sendCommand(['FT.SUGADD', 'articles', 'sqlite indexing', '5']);
const suggestions = await client.sendCommand(['FT.SUGGET', 'articles', 'sqlite']);
console.log(suggestions);  // → ["sqlite full-text", "sqlite indexing"]

Introspection and admin

// Scan keys (cursor-based)
const scanResult = await client.scan(0);
console.log(scanResult);  // → { cursor: 0, keys: [...] }

// Key type
console.log(await client.type('user:1'));  // → "hash"

// Admin commands (via sendCommand)
const sqliteInfo = await client.sendCommand(['SQLITE.INFO']);
const cacheInfo  = await client.sendCommand(['CACHE.INFO']);
const memInfo    = await client.sendCommand(['MEMORY.INFO']);

Data persists across restarts

import { createClient } from 'redis';
import { createRESPlite } from 'resplite/embed';

const DB_PATH = './persistent.db';

// --- First session: write data ---
const srv1 = await createRESPlite({ db: DB_PATH });
const c1 = createClient({ socket: { port: srv1.port, host: '127.0.0.1' } });
await c1.connect();
await c1.set('persistent_key', 'survives restart');
await c1.hSet('user:1', { name: 'Alice' });
await c1.quit();
await srv1.close();

// --- Second session: data is still there ---
const srv2 = await createRESPlite({ db: DB_PATH });
const c2 = createClient({ socket: { port: srv2.port, host: '127.0.0.1' } });
await c2.connect();
console.log(await c2.get('persistent_key'));     // → "survives restart"
console.log(await c2.hGet('user:1', 'name'));    // → "Alice"
await c2.quit();
await srv2.close();

Migrating RediSearch indices

If your Redis source uses RediSearch (Redis Stack or the redis/search module), run migrate-search after (or during) the KV bulk import. It reads index schemas with FT.INFO, creates them in RESPLite, and imports documents by scanning the matching hash keys.

CLI:

# Migrate all indices
npx resplite-import migrate-search \
  --from redis://10.0.0.10:6379 \
  --to   ./resplite.db

# Migrate specific indices only
npx resplite-import migrate-search \
  --from redis://10.0.0.10:6379 \
  --to   ./resplite.db \
  --index products \
  --index articles

# Options
#   --scan-count N          SCAN COUNT hint (default 500)
#   --max-rps N             throttle Redis reads
#   --batch-docs N          docs per SQLite transaction (default 200)
#   --max-suggestions N     cap for suggestion import (default 10000)
#   --no-skip               overwrite if the index already exists in RESPLite
#   --no-suggestions        skip suggestion import

Programmatic API:

const m = createMigration({ from, to, runId });

const result = await m.migrateSearch({
  onlyIndices:     ['products', 'articles'], // omit to migrate all
  batchDocs:       200,
  maxSuggestions:  10000,
  skipExisting:    true,   // default
  withSuggestions: true,   // default
  onProgress: (r) => console.log(r.name, r.docsImported, r.warnings),
});
// result.indices: [{ name, created, skipped, docsImported, docsSkipped, docErrors, sugsImported, warnings, error? }]
// result.aborted: true if interrupted by SIGINT/SIGTERM

What gets migrated:

RediSearch type RESPLite Notes
TEXT TEXT Direct
TAG TEXT Values preserved; TAG filtering lost
NUMERIC TEXT Stored as string; numeric range queries not supported
GEO, VECTOR, … skipped Warning emitted per field
  • Only HASH-based indices are supported. JSON (RedisJSON) indices are skipped.
  • A payload field is added automatically if none of the source fields maps to it.
  • Suggestions are imported via FT.SUGGET "" MAX n WITHSCORES (no cursor; capped at maxSuggestions).
  • Graceful shutdown: Ctrl+C finishes the current document, closes SQLite cleanly, and exits with a non-zero code.
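
The type mapping in the table above can be sketched as a small function (illustrative; the actual migrator may differ):

```javascript
// Illustrative sketch of the field-type mapping: everything importable
// lands in a TEXT field; unsupported types are skipped with a warning.
function mapFieldType(redisSearchType) {
  switch (redisSearchType) {
    case 'TEXT':
    case 'TAG':      // values preserved; TAG filtering lost
    case 'NUMERIC':  // stored as string; numeric range queries unsupported
      return 'TEXT';
    default:
      return null;   // GEO, VECTOR, ... skipped with a per-field warning
  }
}
```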

Compatibility matrix

Supported (v1)

Category Commands
Connection PING, ECHO, QUIT
Strings GET, SET, MGET, MSET, DEL, EXISTS, INCR, DECR, INCRBY, DECRBY
TTL EXPIRE, PEXPIRE, TTL, PTTL, PERSIST
Hashes HSET, HGET, HMGET, HGETALL, HDEL, HEXISTS, HINCRBY
Sets SADD, SREM, SMEMBERS, SISMEMBER, SCARD
Lists LPUSH, RPUSH, LLEN, LRANGE, LINDEX, LPOP, RPOP, BLPOP, BRPOP
Sorted sets ZADD, ZREM, ZCARD, ZSCORE, ZRANGE, ZRANGEBYSCORE
Search (FT.*) FT.CREATE, FT.INFO, FT.ADD, FT.DEL, FT.SEARCH, FT.SUGADD, FT.SUGGET, FT.SUGDEL
Introspection TYPE, SCAN, KEYS, MONITOR
Admin SQLITE.INFO, CACHE.INFO, MEMORY.INFO
Tooling Redis import CLI (see Migration from Redis)

Not supported (v1)

  • Pub/Sub (SUBSCRIBE, PUBLISH, etc.)
  • Streams (XADD, XRANGE, etc.)
  • Lua (EVAL, EVALSHA)
  • Transactions (MULTI, EXEC, WATCH)
  • BRPOPLPUSH, BLMOVE (blocking list moves)
  • SELECT (multiple logical DBs)

Unsupported commands return: ERR command not supported yet.
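
If you want to detect unsupported commands at runtime, you can match on that error text; a sketch assuming a connected node-redis client (supportsCommand is illustrative, not part of the package):

```javascript
// Probe whether the server implements a command by issuing it and
// matching the documented "not supported" error text. Other errors
// (e.g. wrong argument count) imply the command itself exists.
async function supportsCommand(client, command) {
  try {
    await client.sendCommand([command]);
    return true;
  } catch (err) {
    return !/not supported/i.test(err.message);
  }
}
```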

Scripts

Script Description
npm start Run the server
npm test Run all tests
npm run test:unit Unit tests
npm run test:integration Integration tests
npm run test:contract Contract tests (redis client)
npm run test:stress Stress tests
npm run benchmark Comparative benchmark Redis vs RESPLite
npm run import-from-redis One-shot import from Redis into a SQLite DB
npx resplite-import (preflight, bulk, status, apply-dirty, verify) Migration CLI (minimal-downtime flow)
npx resplite-dirty-tracker <start|stop> Dirty-key tracker for migration cutover

Specification

See SPEC.md for the full specification.