Supabee
Orchestrate local Supabase schema/data workflows: split giant SQL dumps into organized files, and apply post-seed migrations in a production-like order.
Why?
Supabase workflows often end up with two pain points:
- One huge dump file (`supabase db dump` / `--data-only`) that's painful to review, edit, or selectively seed from.
- Local reset/start ordering (migrations → seeds) that can diverge from production deploys (new migrations applied onto an already-populated database).
supabee addresses both:
- Split + validate dumps: split schema and data dumps into focused files (by category / by table), then reconstruct and validate round-trip (PR-friendly diffs, easier navigation, and smaller merge conflicts).
- Defer post-seed migrations: temporarily move newer migrations out of the way for `supabase db reset` / `supabase start`, then restore + reapply them after seeds load.
Use cases
- Seed data you can control: keep per-table seed files and point `[db.seed].sql_paths` at only the ones you want.
- Schema as docs / source of truth: keep schema readable in-repo (tables, functions, RLS, permissions, etc.).
- Mimic production locally: catch “works on reset” vs “works on deploy” issues by applying post-seed migrations after data exists.
- One-liners with validation: `sync schema` / `sync data` run dump → split → reconstruct → validate.
Repo hygiene (recommended)
Commit split outputs (for example supabase/schemas/split/** and supabase/seeds/split/**), and ignore large generated artifacts in your repo:
```gitignore
# raw dumps (generated from prod; optional to keep locally)
supabase/schemas/prod-schemas.sql
supabase/seeds/prod-data.sql

# reconstructed outputs (validation artifacts)
supabase/schemas/reconstructed-schemas.sql
supabase/seeds/reconstructed-data.sql
```

Prerequisites
- Node.js >= 18
- Supabase CLI installed and authenticated
Install
Global install (recommended):

```sh
npm i -g supabee
pnpm add -g supabee
bun add -g supabee
```

One-off run without global install:

```sh
npx supabee --help
pnpm dlx supabee --help
bunx supabee --help
```

Project-local install (optional):

```sh
npm install --save-dev supabee
```

Setup
1. Initialize config
Run init to generate supabee.config.json (if it doesn't exist) and update supabase/config.toml seed paths:
```sh
supabee init
```

Review the generated supabee.config.json and adjust paths/limits for your project.
2. Link your Supabase project
If you haven't already, link your local repo to your Supabase project. This is required before you can dump schema or data:
```sh
supabase link
```

You'll be prompted for your project ref and database password. See the Supabase CLI docs for details.
3. Run the primary workflows
```sh
supabee sync schema
supabee sync data
supabee db reset [cutoff_timestamp]
supabee start [cutoff_timestamp]
```

sync commands run end-to-end:

- schema: `supabase db dump` → split → reconstruct → validate
- data: `supabase db dump --data-only` → split → reconstruct → validate
Selective seeding example (optional)
By default, supabee init configures supabase/config.toml to load all split seed files (for example ./seeds/split/*.sql).
To seed only a subset, replace [db.seed].sql_paths with an explicit ordered list (keep 001_setup.sql and 999_cleanup.sql; add the generated *_sequences.sql file if you need sequence values):
```toml
[db.seed]
sql_paths = [
  "./seeds/split/001_setup.sql",
  "./seeds/split/002_public_users.sql",
  "./seeds/split/003_public_projects.sql",
  "./seeds/split/999_cleanup.sql",
]
```

Commands
init
Creates supabee.config.json if missing, then updates supabase/config.toml [db.seed].sql_paths so Supabase knows where to find your split seed files.
```sh
supabee init
```

schema
Processes an existing schema dump into categorized folders:
```
supabase/schemas/split/
├── 00_extensions/
├── 01_setup/
├── 02_types/
├── 03_functions/
├── 04_tables/
├── 05_views/
├── 06_constraints/
├── 07_indexes/
├── 08_foreign_keys/
├── 09_rls/
├── 10_permissions/
├── 11_ownership/
└── 12_others/
```

```sh
# Full chain (split → reconstruct → validate)
supabee schema

# Individual steps
supabee schema split
supabee schema reconstruct
supabee schema validate
```

data
Processes an existing data dump into per-table files with configurable row/statement limits:
```sh
# Full chain (split → reconstruct → validate)
supabee data

# Individual steps
supabee data split
supabee data reconstruct
supabee data validate
```

sync schema
Dumps schema from the linked Supabase project, then runs full schema processing:
```sh
supabee sync schema
supabee sync schema --input supabase/schemas/prod-schemas.sql --output supabase/schemas/split
supabee sync schema --backup
```

sync data
Dumps data (--data-only) from the linked Supabase project, then runs full data processing:
```sh
supabee sync data
supabee sync data --input supabase/seeds/prod-data.sql --output supabase/seeds/split
supabee sync data --backup
supabee sync data --no-backup
```

db reset
Defers post-seed migrations newer than the cutoff timestamp, runs supabase db reset, restores deferred migrations, then reapplies them.
If [cutoff_timestamp] is omitted, supabee auto-detects it from supabase migration list --linked by taking the latest migration version that exists in both local and remote (works even when remote has gaps).
When linked lookup succeeds, supabee stores the value in supabee.config.json as postSeedCutoff.
If linked lookup fails (for example in CI), it falls back to postSeedCutoff from config.
If not linked, supabee runs supabase link and retries once.
```sh
# default re-apply mode: supabase migration up
supabee db reset 20260309180959
supabee db reset

# optional re-apply mode: psql
supabee db reset 20260309180959 --psql
```

start
Defers post-seed migrations newer than the cutoff timestamp, runs supabase start, restores deferred migrations, then reapplies them.
If [cutoff_timestamp] is omitted, supabee auto-detects it from supabase migration list --linked the same way as db reset.
```sh
# explicit cutoff
supabee start 20260309180959

# auto cutoff from linked migration alignment
supabee start

# optional re-apply mode: psql
supabee start --psql
```

Supabase passthrough
Unknown commands are forwarded to the Supabase CLI:

```sh
supabee migration up   # forwards to: supabase migration up
supabee db dump        # forwards to: supabase db dump
```

Overriding paths
schema, data, and sync commands accept --input and --output flags:
```sh
supabee schema split --input path/to/schema.sql --output path/to/split
supabee schema split --input path/to/schema.sql --output path/to/split --backup
supabee data split --input path/to/data.sql --output path/to/split
supabee data split --input path/to/data.sql --output path/to/split --no-backup
```

By default, split operations replace existing output in place (while preserving configured keepFiles) without creating a backup folder.
Use --backup to keep a timestamped backup before replacement.
Configuration
supabee reads supabee.config.json from your project root.
Precedence: CLI flags > config file > built-in defaults.
If the config file is missing, built-in defaults are used. Run supabee init to generate one.
Legacy support: supabase-splitter.config.json is still recognized, but supabee.config.json is preferred.
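That precedence can be pictured as a simple layered merge; this is assumed behavior for illustration, not supabee's actual merge code:

```python
# Assumed precedence: built-in defaults < config file < CLI flags.
def resolve(defaults: dict, config: dict, flags: dict) -> dict:
    merged = dict(defaults)
    for source in (config, flags):
        # later sources win; unset (None) values don't override earlier ones
        merged.update({k: v for k, v in source.items() if v is not None})
    return merged

defaults = {"input": "supabase/schemas/prod-schemas.sql", "backup": False}
config = {"backup": True}         # from supabee.config.json
flags = {"input": "custom.sql"}   # from the command line
print(resolve(defaults, config, flags))  # -> {'input': 'custom.sql', 'backup': True}
```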
Full config reference
```json
{
  "postSeedCutoff": "",
  "schema": {
    "input": "supabase/schemas/prod-schemas.sql",
    "output": "supabase/schemas/split",
    "reconstructed": "supabase/schemas/reconstructed-schemas.sql",
    "backup": false,
    "keepFiles": []
  },
  "data": {
    "input": "supabase/seeds/prod-data.sql",
    "output": "supabase/seeds/split",
    "reconstructed": "supabase/seeds/reconstructed-data.sql",
    "backup": false,
    "maxLinesPerFile": 2000,
    "maxStatementsPerFile": 20,
    "maxRowsPerInsert": 200,
    "tableRules": {},
    "keepFiles": [],
    "ignoreInReconstruct": []
  },
  "init": {
    "seedSqlPaths": ["./seeds/split/*.sql"]
  }
}
```

| Key | Description |
|---|---|
| `schema.input` | Path to your schema dump file |
| `schema.output` | Directory for split schema files |
| `schema.reconstructed` | Path for the reconstructed schema (used in validation) |
| `schema.backup` | Whether split should create a backup folder before replacing output (default: `false`) |
| `schema.keepFiles` | Files in the split dir to preserve across re-splits |
| `data.input` | Path to your data dump file |
| `data.output` | Directory for split data files |
| `data.reconstructed` | Path for the reconstructed data (used in validation) |
| `data.backup` | Whether split should create a backup folder before replacing output (default: `false`) |
| `data.maxLinesPerFile` | Max lines per split file (default: 2000) |
| `data.maxStatementsPerFile` | Max INSERT statements per file (default: 20) |
| `data.maxRowsPerInsert` | Max rows per INSERT statement (default: 200) |
| `data.tableRules` | Per-table overrides (see below) |
| `data.keepFiles` | Files in the split dir to preserve across re-splits |
| `data.ignoreInReconstruct` | Files to skip during reconstruction |
| `init.seedSqlPaths` | Paths written to `supabase/config.toml` `[db.seed].sql_paths` |
| `postSeedCutoff` | Fallback cutoff timestamp used by `db reset`/`start` when linked lookup is unavailable (for example in CI) |
Table-specific rules
Override limits or skip specific tables:
```json
{
  "data": {
    "tableRules": {
      "public.cities": {
        "maxLinesPerFile": 800,
        "maxStatementsPerFile": 8,
        "maxRowsPerInsert": 80
      },
      "public.audit_logs": {
        "skip": true
      }
    }
  }
}
```

Flags
schema, data, sync schema, and sync data support:
- `--input`: source SQL file
- `--output`: output path (split dir for `split`, reconstructed file for `reconstruct`/`validate`)
- `--backup`: create backup of dirty split directory before running split
- `--no-backup`: disable backup of dirty split directory before running split
For validate, you can pass the reconstructed path either as `--output <path>` or as the second positional argument.
db reset supports:
- `--psql`: apply deferred migrations via `psql` instead of `supabase migration up`
- `--migrations-dir <path>`: override migrations directory (default `supabase/migrations`)
- `--temp-dir <path>`: override temporary defer directory (default `supabase/.tmp-migrations`)
start supports:
- `--psql`: apply deferred migrations via `psql` instead of `supabase migration up`
- `--migrations-dir <path>`: override migrations directory (default `supabase/migrations`)
- `--temp-dir <path>`: override temporary defer directory (default `supabase/.tmp-migrations`)
Deep dive: why supabee db reset and supabee start
The short version is in the Why? section above. These commands matter most when local replay order diverges from how production data actually evolved:
- Seed files may be shaped for pre-migration schema.
- Some migrations intentionally mutate/seed production data for traceability (for example RBAC rows).
- Local `migrations → seed` replay can fail even when production worked on already-populated data.
By deferring post-seed migrations and applying them after seed load, supabee better matches this production-style path.
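The defer step can be pictured as partitioning migration files by their version prefix; `partition_migrations` is a hypothetical helper for illustration, not supabee's internal API:

```python
# Illustrative: migrations whose version prefix is newer than the cutoff are
# set aside before reset/start and reapplied only after seeds have loaded.
def partition_migrations(filenames: list[str], cutoff: str) -> tuple[list[str], list[str]]:
    keep, defer = [], []
    for name in sorted(filenames):
        version = name.split("_", 1)[0]  # e.g. "20260309180959_add_rbac.sql"
        (defer if version > cutoff else keep).append(name)
    return keep, defer

files = ["20260101120000_init.sql", "20260309180959_rbac.sql", "20260401000000_post_seed.sql"]
keep, defer = partition_migrations(files, "20260309180959")
print(defer)  # -> ['20260401000000_post_seed.sql']
```

Everything in `keep` runs before seeds (as on a fresh reset); everything in `defer` runs after, mirroring how those migrations hit an already-populated production database.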
Migration + Seed Duplication Caveat
If the same logical data mutation exists in both migration SQL and seed files, local replay can become order-dependent and brittle.
Typical symptoms:
- "already exists" errors for enums or enum values,
- duplicate key or constraint violations,
- reset/start-only failures that don’t appear on incremental production deploys.
Recommended approach:
- Keep schema structure changes in migrations.
- Keep baseline/reference seed rows in seed files.
- Make migration-time data mutations idempotent (`IF NOT EXISTS`, `ON CONFLICT DO NOTHING`, guarded updates).
- Avoid duplicating the exact same inserts/enum mutations in both seeds and migrations unless both paths are explicitly idempotent.
Help
```sh
supabee --help
supabee init --help
supabee sync --help
supabee sync schema --help
supabee sync data --help
supabee schema --help
supabee data --help
supabee start --help
supabee db --help
supabee db reset --help
```

The legacy CLI alias is still available: `supabase-splitter --help`.
Development
```sh
npm install
npm run typecheck
npm run build
npm run test
npm run pack:check
```

RC gate checklist: docs/rc-checklist.md