# @db2lake/driver-databricks
High-performance Databricks destination driver for `@db2lake`. It writes batches of records to a Databricks SQL table using the `@databricks/sql` SDK and supports optional table creation, transactions with retries, and configurable batching.
> **Note:** This driver depends on the `@databricks/sql` package at runtime.
## Install

```bash
npm install @db2lake/driver-databricks
```

## Project structure

```text
├── src/
│   ├── index.ts   # DatabricksDestinationDriver implementation
│   └── type.ts    # DatabricksConfig and related types
└── package.json   # Package metadata
```

## Quick usage
```typescript
import { DatabricksDestinationDriver, DatabricksConfig } from '@db2lake/driver-databricks';

const config: DatabricksConfig = {
  connection: {
    host: 'workspace.cloud.databricks.com',
    path: '/sql/1.0/warehouses/xxx',
    token: process.env.DATABRICKS_TOKEN!
  },
  database: 'my_db',
  table: 'my_table',
  batchSize: 1000,
  transaction: { enabled: true, maxRetries: 3 }
};

const driver = new DatabricksDestinationDriver<{ name: string; age: number }>(config);

try {
  await driver.insert([{ name: 'John', age: 30 }]);
  // Inserts are buffered and flushed when batchSize is reached or on close()
} finally {
  await driver.close();
}
```

## Create table with schema
If you want the driver to create the target table automatically, pass `createTableOptions` in the config. The driver will generate and execute a `CREATE TABLE IF NOT EXISTS` statement using the provided schema, optional table properties, and comment.
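A minimal sketch of such a config. The `{ name, type }` column shape and the Delta table property shown here are illustrative assumptions; check `DatabricksColumn` in this package's `type.ts` for the exact fields.

```typescript
import { DatabricksConfig } from '@db2lake/driver-databricks';

const config: DatabricksConfig = {
  connection: {
    host: 'workspace.cloud.databricks.com',
    path: '/sql/1.0/warehouses/xxx',
    token: process.env.DATABRICKS_TOKEN!
  },
  database: 'my_db',
  table: 'users',
  createTableOptions: {
    // Assumed column shape; verify against DatabricksColumn in type.ts.
    schema: [
      { name: 'name', type: 'STRING' },
      { name: 'age', type: 'INT' }
    ],
    properties: { 'delta.appendOnly': 'true' }, // illustrative table property
    comment: 'User records loaded by db2lake'
  }
};
```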
## Config summary

- `connection`: `{ host, path, token }` for the Databricks SQL warehouse
- `database`: target database/catalog name
- `table`: target table name
- `createTableOptions?`: `{ schema: DatabricksColumn[], properties?: Record<string, string>, comment?: string }`
- `writeMode?`: `'append' | 'overwrite'` (default: `'append'`)
- `batchSize?`: flush threshold (default: `1000`)
- `transaction?`: `{ enabled?: boolean, maxRetries?: number }` (defaults: `enabled: true`, `maxRetries: 3`)
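For example, a refresh-style load might combine `writeMode: 'overwrite'` with its own batching and retry settings (the values below are illustrative; the fields are the ones listed above):

```typescript
import { DatabricksConfig } from '@db2lake/driver-databricks';

const overwriteConfig: DatabricksConfig = {
  connection: {
    host: 'workspace.cloud.databricks.com',
    path: '/sql/1.0/warehouses/xxx',
    token: process.env.DATABRICKS_TOKEN!
  },
  database: 'my_db',
  table: 'daily_snapshot',
  writeMode: 'overwrite',  // replace existing rows instead of appending
  batchSize: 500,
  transaction: { enabled: true, maxRetries: 5 }
};
```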
## Best practices

- Always call `close()` to flush pending rows and release connections.
- Tune `batchSize` to balance performance and memory use (see the sketch below).
- Use `createTableOptions` to automate schema management in CI or initial runs.
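As a rough illustration of how `batchSize` interacts with a large load, rows can be handed to the driver in chunks while it flushes on its own schedule. `fetchChunks` is a hypothetical data source; the driver API is the `insert`/`close` pair from the quick-usage example.

```typescript
import { DatabricksDestinationDriver, DatabricksConfig } from '@db2lake/driver-databricks';

interface UserRow {
  name: string;
  age: number;
}

// Hypothetical source: yields pages of rows from some upstream system.
declare function fetchChunks(): AsyncIterable<UserRow[]>;

async function load(config: DatabricksConfig): Promise<void> {
  const driver = new DatabricksDestinationDriver<UserRow>(config);
  try {
    for await (const chunk of fetchChunks()) {
      // Rows are buffered internally and flushed whenever the
      // buffer reaches config.batchSize.
      await driver.insert(chunk);
    }
  } finally {
    // close() flushes any remaining buffered rows and releases the connection.
    await driver.close();
  }
}
```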
## License
MIT