Package Exports
- exabase
- exabase/lib/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (exabase) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
Readme
Exabase
A scaling-focused distributed NoSQL database with surprising performance and strong data consistency.
Explore Everst APIs »
Join Community · Report Bug · Request Feature
Exabase is a distributed database infrastructure with an ACID-compliant standard, auto scaling, backup and recovery, and strong data consistency.
A distributed and performant database for server-side JavaScript runtimes.
--
Rationale
Exabase, as a scalable NoSQL database, supports the following features:
- ACID Compliant transactions.
- Batch transactions, used for performing large writes (INSERT, DELETE, UPDATE) as a single atomic operation.
- Inbuilt distribution interface (Rings).
- Strong data consistency across all Exabase Rings.
- Security set up out of the box with JSON Web Tokens.
- Standalone and in-app usage through exposed Ring endpoints and an inbuilt HTTP app.
- An optional, highly performant inbuilt HTTP app interface exposed through Rings.
- Easy backup and recovery system.
- A powerful client-side Exabase library (ExabaseStore), with theoretical load-balanced distribution across all connected Rings.
- Client-side administration interface.
- A strong community driven by a passion for making the web better.
Another unique benefit of Exabase is its surprising performance; honestly, we never knew it could happen!
Unlike Cassandra or other query-language-based DBMSs, Exabase is designed as a light but powerful DBMS. Using its intuitive schema and query API design, the client-side ExaStore library, and the offline ExaCore Admin Panel software, you can query and manage your app's data with ease.
These benefits are essential and delightful.
--
How Exabase works
Exabase achieves a high degree of efficiency and strong scalability by employing the following techniques.
A separation-of-concerns mechanism across schema tables. This improves efficiency by keeping each schema manager in its own space and process.
Exabase uses an efficient storage mechanism, which includes MessagePack serialisation and Linux-based archiving via Node.js processes.
Exabase makes extensive use of binary search and custom binary insert algorithms, allowing for sorted storage and efficient querying of data.
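As a rough illustration of the general idea only (this is not Exabase's internal code), a binary insert keeps an array sorted by finding the insertion point with binary search before splicing the value in:

// illustrative only: generic binary insert into an already sorted array
function binaryInsert(sorted: number[], value: number): number[] {
  let low = 0;
  let high = sorted.length;
  // binary search for the first index whose element is >= value
  while (low < high) {
    const mid = (low + high) >>> 1;
    if (sorted[mid] < value) low = mid + 1;
    else high = mid;
  }
  sorted.splice(low, 0, value); // insert at that position, keeping the array sorted
  return sorted;
}

binaryInsert([1, 3, 7, 9], 5); // [1, 3, 5, 7, 9]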
Strong data consistency across all Exabase Rings is achieved using the Dynamo model, in which a write is only acknowledged to the client after it is confirmed to have persisted to a required number of replicas.
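A minimal conceptual sketch of that quorum idea (not Exabase's actual replication code) looks like this:

// illustrative quorum-style write confirmation (not Exabase internals)
async function quorumWrite(
  replicas: Array<(data: unknown) => Promise<boolean>>,
  data: unknown,
  quorum: number
): Promise<void> {
  const results = await Promise.allSettled(replicas.map((persist) => persist(data)));
  const confirmed = results.filter(
    (r) => r.status === "fulfilled" && r.value === true
  ).length;
  // only respond to the client once enough replicas have confirmed persistence
  if (confirmed < quorum) {
    throw new Error(`write not durable: ${confirmed}/${quorum} replicas confirmed`);
  }
}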
Security is handled using JSON Web Tokens, allowing each Exabase instance to communicate with the Exabase Rings interface securely, and allowing your apps to communicate with Exabase in the same manner.
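For a sense of the approach (a sketch, not Exabase's exact code), the jsonwebtoken package can sign and verify tokens with a shared secret such as EXABASE_SECRET; the payload used here is only an example:

// illustrative JWT signing and verification between nodes or apps (not Exabase internals)
import jwt from "jsonwebtoken";

const secret = process.env.EXABASE_SECRET as string;

// a caller signs a token before hitting a Ring endpoint
const token = jwt.sign({ node: "ring-member-1" }, secret, { expiresIn: "1h" });

// the receiving instance verifies the token before serving the request
try {
  const payload = jwt.verify(token, secret);
  console.log("authorised request from", payload);
} catch {
  console.log("rejected: invalid or expired token");
}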
Exabase employs log-file managers and handles log-file resizing for efficient memory usage; log files are fully resizeable and durable.
Consistency and durability in log files and other critical files are achieved through an ACID-compliant data-processing mechanism that is optimised for crash recovery and consistency checks out of the box.
Exabase transactions are grounded in a strongly atomic and isolated model, using a WAL (write-ahead log) mechanism that achieves faster write transactions and efficient data consistency across reads of Exabase schema log files. This allows for strong data consistency and durability within the Exabase DBMS infrastructure.
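The general shape of a write-ahead log, shown here purely as a conceptual sketch (not Exabase's implementation), is to append the intended change to a durable log before applying it to the main data files:

// conceptual write-ahead-log sketch (not Exabase's implementation)
import { appendFileSync } from "node:fs";

function walWrite(walPath: string, record: Record<string, unknown>, apply: () => void) {
  // 1. append the change to the log file first, so it survives a crash
  appendFileSync(walPath, JSON.stringify(record) + "\n");
  // 2. only then apply the change to the main data files;
  //    on recovery, the log can be replayed to restore consistency
  apply();
}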
Exabase employs Ring-to-Ring hydration to set up and sanitise new Exabase instances, allowing a new instance to join the shared Exabase Rings interface.
Exabase achieves efficient search queries using search-field indexing: you decide which fields of a schema get indexed for search in your column options, allowing Exabase to focus on what's necessary and avoid unnecessary cost in real time.
Linux-based backup functionality you can call in your app to get a single uploadable zip; you can call it periodically as your needs require.
Requirements to use Exabase
A few Exabase features depend on the Linux OS, such as backup, which uses the GNU/Linux zip utility via Node child processes. For development purposes, Exabase can run on any OS as long as the backup functionality is not used.
Exabase supports all server-side JavaScript runtimes:
- Node.js.
- Deno.
- Bun.
- Edge support for runtimes like Cloudflare Workers (planned).
Exabase Memory and storage requirements
There's no hard rule here: if a JavaScript runtime can run fine, Exabase can run fine.
Exabase sets its memory-based RCT (Exabase Regularity Cache Tank) cache usage to 10% by default, which is fine for most workloads.
You can increase it to around 20%, but you shouldn't go past 40% if your app is CPU- and memory-intensive, and the default is best in most cases. However, the more RCT space you allow, the faster your app's read operations will be.
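For example, the cache percentage can be raised through the EXABASE_MEMORY_PERCENT option documented below; the database name here is only an illustration:

import Exabase from "exabase";

// raise the RCT cache from the default 10% to 20% of memory
const db = new Exabase({
  name: "my-app-db",
  EXABASE_MEMORY_PERCENT: 20,
});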
How to get started with Exabase DBMS.
Installation
Install Exabase right away in your project using npm or other JavaScript package managers.
npm i exabase --save
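If you prefer another package manager, the equivalent commands (assuming the same exabase package on the npm registry) are:

yarn add exabase
pnpm add exabase
bun add exabase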
Usage
Exabase is for the cave man: it has carefully designed APIs that allow you to perform most actions against your database in a very intuitive way.
When improvements and changes roll out, we will quickly update this page and the web documentation currently being prepared.
We intend to move with little friction and have implemented many of the best decisions and designs we could think of or research right from the beginning.
The Exabase class API
The Exabase class accepts an object argument with the following options:
Options
Property | Required | Type | Details
---|---|---|---
name | false | string | The folder to persist data into. Defaults to Exabase.
schemas | false | SchemaType[] | An array of defined schema instances.
EXABASE_SECRET | false | string | A secret for authorising data access.
EXABASE_MEMORY_PERCENT | false | number | RCT memory cache percentage.
EXABASE_STORAGE_PERCENT | false | number | Storage percentage.
username | false | string | Username for authorised access.
password | false | string | Password for authorised access.
ringbearers | false | string[] | List of URLs that point to other nodes in this node's ring.
port | false | number | Port for the inbuilt HTTP app. Defaults to 8080.
mode | false | number | The mode of this node when joining the shared Ring interface. Default is REPLICATION. REPLICATION: the node will be a replica instance, with auto hydration from a ring bearer. EXTENSION: the node will be an extension of a ring bearer's data with a defined extension level.
extension_level | false | number | The level of extension this node is set to handle.
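A sketch of how several of these options might be passed together; the folder name, credentials and ring-bearer URL are placeholders, and mode/extension_level are omitted because their exact value shapes are not documented here:

import Exabase from "exabase";

const db = new Exabase({
  name: "my-app-db", // folder to persist data into
  schemas: [], // your Schema instances go here
  EXABASE_SECRET: process.env.EXABASE_SECRET,
  EXABASE_MEMORY_PERCENT: 10, // RCT memory cache percentage
  username: "admin", // placeholder credentials
  password: "change-me",
  ringbearers: ["http://10.0.0.2:8080"], // other nodes in this node's ring
  port: 8080,
});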
Transaction Class Methods
.find
find(FindOptions?: {
where?: Record<string, any>;
limit?: number;
offset?: number;
relationships?: string[];
}): Promise<unknown>;
Used to select one or many records, with the ability to populate their relationships.
await trx.find();
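The options narrow and populate the result; the field and relationship names below are illustrative:

// find with a filter, paging and relationship population (illustrative field names)
const results = await trx.find({
  where: { title: "the book of the cave men." },
  limit: 10,
  offset: 0,
  relationships: ["info"],
});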
.save
save(data: Record<string, any>): Promise<unknown>;
Can be used to create a new record, or to update an existing record when the data carries an _id field.
await trx.save({ name: "cave man" });
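Since data that carries an _id is treated as an update, a saved record can be modified by passing its _id back in (a minimal sketch, assuming save resolves to the saved record with its _id, as in the basic setup example below):

// create a record, then update it by passing its _id back to save
const created = (await trx.save({ name: "cave man" })) as { _id: string };
await trx.save({ _id: created._id, name: "cave woman" });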
.delete
delete(DeleteOptions: {
_id: string;
}): Promise<unknown>;
Used to remove a record.
await trx.delete({ _id: "12" });
.count
count(): Promise<unknown>;
Counts the records in the table.
await trx.count();
.Batch
Batch: Batch;
Returns the batch query builder for this transaction; see the query-builder usage in the basic setup example below.
await trx.Batch;
.BatchedBatch
BatchedBatch: BatchedBatch;
Creates a write batch, used for performing multiple writes as a single atomic operation. The maximum number of writes allowed in a single batch is up to you.
const bq = trx.BatchedBatch;
// ? load an array of data to insert
await bq.LOAD([
{
FirstName: "George",
LastName: "Dods I",
},
{
FirstName: "woods",
LastName: "Dods II",
},
{
FirstName: "Annie",
LastName: "Dods III",
},
]).INSERT;
// ? execute the query
await bq.EXEC;
BatchedBatch class methods
- .LOAD(array) // loads the data
- .INSERT // set up the previous load array to be for insert
- .DELETE // set up the previous load array to be for delete
- .UPDATE // set up the previous load array to be for update
- .EXEC // executes the entire query
Supports multiple loads of different query types, e.g.
await bq.LOAD([...]).INSERT;
await bq.LOAD([...]).DELETE;
await bq.LOAD([...]).UPDATE;
// then finally
await bq.EXEC;
.Flush
Flush: Promise<void>;
Immediately sends the commit buffer to the read stream so the most recent writes become visible to reads.
await trx.Flush;
.addRelation
addRelation(options: {
_id: string;
foreign_id: string;
relationship: string;
}): Promise<unknown>;
Adds a relationship between a record and a foreign record.
await trx.addRelation({ _id: "...", foreign_id: "...", relationship: "info" });
.removeRelation
removeRelation(options: {
_id: string;
foreign_id: string;
relationship: string;
}): Promise<unknown>;
Removes a relationship between a record and a foreign record.
await trx.removeRelation({ _id: "...", foreign_id: "...", relationship: "info" });
A Basic Database setup
import Exabase, { Schema } from "exabase";
//? creating a schema
const Metadata = new Schema({
tableName: "Metadata",
columns: {
info: {
type: String,
default: "A nice book for all readers",
nullable: true,
},
},
});
const Books = new Schema({
tableName: "Books",
columns: {
title: { type: String },
},
relationship: {
info: {
target: Metadata,
type: "MANY",
},
},
});
// ? Initialising Exabase
const AMAZON = new Exabase({
name: "amazon-books",
schemas: [Books, Metadata],
EXABASE_SECRET: process.env.EXABASE_SECRET,
});
// ? get Exabase ready
await AMAZON.Ready;
// ? get transactions for your schemas
const bookTRX = AMAZON.getTransaction(Books);
const metaTRX = AMAZON.getTransaction(Metadata);
// ? insert a new record
const book1 = await bookTRX.save({
title: "the book of the cave men.",
});
// ? using the query builder
const book2 = await bookTRX.Batch.INSERT({
title: "the book of the cave men part 2.",
}).EXEC;
// ? search for a record with query builder.
let books = await bookTRX.Batch.SEARCH({
title: "the book of the cave men.",
})
.POPULATE(["info"])
.OFFSET(3)
.LIMIT(1).EXEC;
// ? creating a new metadata
await metaTRX.Batch.INSERT({}).EXEC;
// ? immediately send the commit buffer to the read stream
// ? we do this because we need immediate access to it below
await metaTRX.Flush;
// ? select all the metadata and add each one as a relation of the book
const all_metas = await metaTRX.Batch.SELECT("*").EXEC;
for (let i = 0; i < all_metas.length; i++) {
const element = all_metas[i];
await bookTRX.addRelation({
_id: book1._id,
foreign_id: element._id,
relationship: "info",
});
}
// ? search again, this time populating the info relationship
books = await bookTRX.Batch.SEARCH({
title: "the book of the cave men.",
})
.POPULATE()
// .OFFSET(3)
.LIMIT(1).EXEC;
console.log(books);
console.log(books[0]._foreign.info);
await AMAZON.connectRing({
source: "./tests/http",
});
Benchmarks
This benchmark compares Exabase against SQLite.
SQLite, as we know, has a tiny footprint and of course really great performance, with full ACID guarantees and a relational design.
We are thrilled that Exabase performs really well and clearly beats SQLite in these tests.
With that confidence, we encourage everyone to try Exabase for themselves.
# without the Exabase RCT cache
cpu: Intel(R) Celeron(R) CPU 4205U @ 1.80GHz
runtime: bun 1.0.0 (x64-linux)
benchmark time (avg) (min … max) p75 p99 p995
---
SELECT * FROM "Employee" Exabase 155.28 µs/iter (110.63 µs … 5.41 ms) 148.79 µs 645.66 µs 1.25 ms
SELECT * FROM "Employee" sqlite 259.3 µs/iter (190.6 µs … 3 ms) 265.46 µs 1.09 ms 1.18 ms
1.7x faster
# with the Exabase RCT cache
cpu: Intel(R) Celeron(R) CPU 4205U @ 1.80GHz
runtime: bun 1.0.0 (x64-linux)
benchmark time (avg) (min … max) p75 p99 p995
---
SELECT * FROM "Employee" Exabase 2.42 µs/iter (1.91 µs … 7.08 ms) 2.18 µs 5.07 µs 6.4 µs
SELECT * FROM "Employee" sqlite 270.73 µs/iter (187.72 µs … 3.24 ms) 267.24 µs 1.19 ms 1.48 ms
112x faster
# Does the RCT cache, however, destroy performance? No.
Data in Exabase - 10072
Data in sqlite - 9
cpu: Intel(R) Celeron(R) CPU 4205U @ 1.80GHz
runtime: bun 1.0.0 (x64-linux)
benchmark time (avg) (min … max) p75 p99 p995
---
SELECT * FROM "Employee" Exabase 6.57 µs/iter (4.65 µs … 13.42 ms) 5.48 µs 16.96 µs 25.02 µs
SELECT * FROM "Employee" sqlite 324.54 µs/iter (189.04 µs … 52.11 ms) 271.36 µs 1.59 ms 2.03 ms
Regularity Cache Tank
The Regularity Cache Tank, or RCT, is a basic LOG-file-level cache. This means it stores the entire LOG(n) file of a table in memory, where n is the last active LOG file.
This might not be okay for resource-heavy workloads, hence it can be turned off per schema.
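A sketch of what a per-schema opt-out could look like; the RCT flag name here is hypothetical, so check the Schema options in your installed version before relying on it:

import { Schema } from "exabase";

// hypothetical flag name; verify against your Exabase version's Schema options
const Logs = new Schema({
  tableName: "Logs",
  RCT: false, // opt this resource-heavy table out of the Regularity Cache Tank
  columns: {
    message: { type: String },
  },
});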
Exabase peripherals
ExaCore
ExaCore is an administrative panel that connects to, monitors, and executes operations on your Exabase Rings interface; it is hosted offline by default. Get ExaCore
ExabaseStore
A client-side library for communicating with your Exabase Rings interface; it does high-level theoretical load balancing out of the box. Learn more about ExaStore
ExaRouter
A JavaScript-based HTTP router extracted from Exabase Rings for use in standalone apps.
People love the innovation we achieved when we customised how routes conform to function names. This router also uses the Cradova route-matcher logic to resolve routes quickly without any regex; this is as fast as it can get. Learn more about ExaRouter.
It's super interesting what has been achieved so far.
MIT Licensed
Open source and free.
Uiedbook is an open-source team of web-focused engineers whose vision is to make the web better by improving and innovating infrastructure for a better web experience.
You can join the Uiedbook outsiders group on Telegram. Ask your questions and become a team member/contributor by becoming an insider.
Contributing to Exabase development
Your contributions are a force for change whenever you make them. You can support Exabase's growth and improvement by contributing recurring or one-off donations to our team's donation handles: Stripe, Ethereum: 09839848uhehfbh, Bitcoin: 09i30932u9.