rust-kgdb
Production-ready RDF/hypergraph database with GraphFrames analytics, vector embeddings, Datalog reasoning, and Pregel BSP processing.
v0.3.0 - Major Feature Release: GraphFrames, EmbeddingService, DatalogProgram, Pregel, Hypergraph
🎯 Features Overview
| Feature | Description |
|---|---|
| GraphDB | Core RDF/SPARQL database with 100% W3C compliance |
| GraphFrames | Spark-compatible graph analytics (PageRank, triangles, components) |
| Motif Finding | Graph pattern DSL for structural queries (fraud rings, recommendations) |
| EmbeddingService | Vector similarity search, text search, multi-provider embeddings |
| Embedding Triggers | Automatic embedding generation on INSERT/UPDATE/DELETE |
| Embedding Providers | OpenAI, Voyage, Cohere, Anthropic, Mistral, Jina, Ollama, HF-TEI |
| DatalogProgram | Rule-based reasoning with transitive closure |
| Pregel | Bulk Synchronous Parallel graph processing |
| Hypergraph | Native hyperedge support beyond RDF triples |
| Factory Functions | Pre-built graph generators for testing |
Installation
npm install rust-kgdb
Complete API Examples
1. Core GraphDB (RDF/SPARQL)
const { GraphDB, getVersion } = require('rust-kgdb')
console.log(`rust-kgdb v${getVersion()}`)
// Create database with base URI
const db = new GraphDB('http://example.org/my-app')
// Load RDF data (N-Triples format)
db.loadTtl(`
<http://example.org/alice> <http://xmlns.com/foaf/0.1/name> "Alice" .
<http://example.org/alice> <http://xmlns.com/foaf/0.1/age> "28"^^<http://www.w3.org/2001/XMLSchema#integer> .
<http://example.org/bob> <http://xmlns.com/foaf/0.1/name> "Bob" .
<http://example.org/alice> <http://xmlns.com/foaf/0.1/knows> <http://example.org/bob> .
`, null)
// SPARQL SELECT query
const results = db.querySelect('SELECT ?name WHERE { ?person <http://xmlns.com/foaf/0.1/name> ?name }')
console.log('Names:', results.map(r => r.bindings.name))
// SPARQL ASK query
const hasAlice = db.queryAsk('ASK { <http://example.org/alice> ?p ?o }')
console.log('Has Alice:', hasAlice) // true
// SPARQL CONSTRUCT query
const graph = db.queryConstruct('CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }')
console.log('Graph:', graph)
// Count triples
console.log('Triple count:', db.countTriples())
// Named graphs
db.loadTtl('<http://x> <http://y> <http://z> .', 'http://example.org/graph1')
2. GraphFrames Analytics (Spark-Compatible)
const {
GraphFrame,
friendsGraph,
completeGraph,
chainGraph,
starGraph,
cycleGraph,
binaryTreeGraph,
bipartiteGraph
} = require('rust-kgdb')
// Create graph from vertices and edges
const graph = new GraphFrame(
JSON.stringify([{id: "alice"}, {id: "bob"}, {id: "carol"}, {id: "dave"}]),
JSON.stringify([
{src: "alice", dst: "bob"},
{src: "bob", dst: "carol"},
{src: "carol", dst: "dave"},
{src: "dave", dst: "alice"}
])
)
// Graph statistics
console.log('Vertices:', graph.vertexCount()) // 4
console.log('Edges:', graph.edgeCount()) // 4
// === PageRank Algorithm ===
const ranks = JSON.parse(graph.pageRank(0.15, 20)) // resetProbability=0.15, iterations=20
console.log('PageRank:', ranks)
// { ranks: { alice: 0.25, bob: 0.25, carol: 0.25, dave: 0.25 } }
// === Connected Components ===
const components = JSON.parse(graph.connectedComponents())
console.log('Components:', components)
// === Triangle Counting (WCOJ Optimized) ===
const k4 = completeGraph(4) // K4 has exactly 4 triangles
console.log('Triangles in K4:', k4.triangleCount()) // 4
const k5 = completeGraph(5) // K5 has exactly 10 triangles (C(5,3))
console.log('Triangles in K5:', k5.triangleCount()) // 10
// === Motif Pattern Matching ===
const chain = chainGraph(4) // v0 -> v1 -> v2 -> v3
// Find single edges
const edges = JSON.parse(chain.find("(a)-[]->(b)"))
console.log('Edge patterns:', edges.length) // 3
// Find two-hop paths
const twoHop = JSON.parse(chain.find("(a)-[]->(b); (b)-[]->(c)"))
console.log('Two-hop patterns:', twoHop.length) // 2 (v0->v1->v2, v1->v2->v3)
// === Factory Functions ===
const friends = friendsGraph() // Social network with 6 vertices
const star = starGraph(5) // Hub with 5 spokes (6 vertices, 5 edges)
const complete = completeGraph(4) // K4 complete graph
const cycle = cycleGraph(5) // Pentagon cycle (5 vertices, 5 edges)
const tree = binaryTreeGraph(3) // Binary tree depth 3
const bipartite = bipartiteGraph(3, 4) // 3 left + 4 right vertices
console.log('Star graph:', star.vertexCount(), 'vertices,', star.edgeCount(), 'edges')
console.log('Cycle graph:', cycle.vertexCount(), 'vertices,', cycle.edgeCount(), 'edges')
2b. Motif Pattern Matching (Graph Pattern DSL)
Motifs are recurring structural patterns in graphs. rust-kgdb supports a powerful DSL for finding motifs:
const { GraphFrame, completeGraph, chainGraph, cycleGraph, friendsGraph } = require('rust-kgdb')
// === Basic Motif Syntax ===
// (a)-[]->(b) Single edge from a to b
// (a)-[e]->(b) Named edge 'e' from a to b
// (a)-[]->(b); (b)-[]->(c) Two-hop path (chain pattern)
// !(a)-[]->(b) Negation (edge does NOT exist)
// === Find Single Edges ===
const chain = chainGraph(5) // v0 -> v1 -> v2 -> v3 -> v4
const edges = JSON.parse(chain.find("(a)-[]->(b)"))
console.log('All edges:', edges.length) // 4
// === Two-Hop Paths (Friend-of-Friend Pattern) ===
const twoHop = JSON.parse(chain.find("(a)-[]->(b); (b)-[]->(c)"))
console.log('Two-hop paths:', twoHop.length) // 3
// v0->v1->v2, v1->v2->v3, v2->v3->v4
// === Three-Hop Paths ===
const threeHop = JSON.parse(chain.find("(a)-[]->(b); (b)-[]->(c); (c)-[]->(d)"))
console.log('Three-hop paths:', threeHop.length) // 2
// === Triangle Pattern (Cycle of Length 3) ===
const k4 = completeGraph(4) // K4 has triangles
const triangles = JSON.parse(k4.find("(a)-[]->(b); (b)-[]->(c); (c)-[]->(a)"))
// Filter to avoid counting same triangle multiple times
const uniqueTriangles = triangles.filter(t => t.a < t.b && t.b < t.c)
console.log('Triangles in K4:', uniqueTriangles.length) // 4
// === Star Pattern (Hub with Multiple Spokes) ===
const social = new GraphFrame(
JSON.stringify([
{id: "influencer"},
{id: "follower1"}, {id: "follower2"}, {id: "follower3"}
]),
JSON.stringify([
{src: "influencer", dst: "follower1"},
{src: "influencer", dst: "follower2"},
{src: "influencer", dst: "follower3"}
])
)
// Find hub pattern: someone with 2+ outgoing edges
const hubPattern = JSON.parse(social.find("(hub)-[]->(f1); (hub)-[]->(f2)"))
console.log('Hub patterns (2+ followers):', hubPattern.length)
// === Reciprocal Relationship (Mutual Friends) ===
const mutual = new GraphFrame(
JSON.stringify([{id: "alice"}, {id: "bob"}, {id: "carol"}]),
JSON.stringify([
{src: "alice", dst: "bob"},
{src: "bob", dst: "alice"}, // Reciprocal
{src: "bob", dst: "carol"} // One-way
])
)
const reciprocal = JSON.parse(mutual.find("(a)-[]->(b); (b)-[]->(a)"))
console.log('Mutual relationships:', reciprocal.length) // 2 (alice<->bob counted twice)
// === Diamond Pattern (Common in Fraud Detection) ===
// A -> B, A -> C, B -> D, C -> D (convergence point D)
const diamond = new GraphFrame(
JSON.stringify([{id: "A"}, {id: "B"}, {id: "C"}, {id: "D"}]),
JSON.stringify([
{src: "A", dst: "B"},
{src: "A", dst: "C"},
{src: "B", dst: "D"},
{src: "C", dst: "D"}
])
)
const diamondPattern = JSON.parse(diamond.find(
"(a)-[]->(b); (a)-[]->(c); (b)-[]->(d); (c)-[]->(d)"
))
console.log('Diamond patterns:', diamondPattern.length) // 1
// === Use Case: Fraud Ring Detection ===
// Find circular money transfers: A -> B -> C -> A
const transactions = new GraphFrame(
JSON.stringify([
{id: "acc001"}, {id: "acc002"}, {id: "acc003"}, {id: "acc004"}
]),
JSON.stringify([
{src: "acc001", dst: "acc002", amount: 10000},
{src: "acc002", dst: "acc003", amount: 9900},
{src: "acc003", dst: "acc001", amount: 9800}, // Suspicious cycle!
{src: "acc003", dst: "acc004", amount: 5000} // Normal transfer
])
)
const cycles = JSON.parse(transactions.find(
"(a)-[]->(b); (b)-[]->(c); (c)-[]->(a)"
))
console.log('Circular transfer patterns:', cycles.length) // Found fraud ring!
// === Use Case: Recommendation (Friends-of-Friends not yet connected) ===
const network = friendsGraph()
const fofPattern = JSON.parse(network.find("(a)-[]->(b); (b)-[]->(c)"))
// Filter: a != c and no direct edge a->c (potential recommendation)
console.log('Friend-of-friend patterns for recommendations:', fofPattern.length)
Motif Pattern Reference
| Pattern | DSL Syntax | Description |
|---|---|---|
| Edge | (a)-[]->(b) | Single directed edge |
| Named Edge | (a)-[e]->(b) | Edge with binding name |
| Two-hop | (a)-[]->(b); (b)-[]->(c) | Path of length 2 |
| Triangle | (a)-[]->(b); (b)-[]->(c); (c)-[]->(a) | 3-cycle |
| Star | (h)-[]->(a); (h)-[]->(b); (h)-[]->(c) | Hub pattern |
| Diamond | (a)-[]->(b); (a)-[]->(c); (b)-[]->(d); (c)-[]->(d) | Convergence |
| Negation | !(a)-[]->(b) | Edge must NOT exist |
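As a mental model for how a multi-clause pattern such as (a)-[]->(b); (b)-[]->(c) is evaluated, here is a plain-JavaScript sketch of the underlying join over an edge list (an illustration of the idea only, not the library's actual matching engine):

```javascript
// Two-hop motif as a join over an edge list (illustrative sketch).
// Each clause binds variables; the shared variable `b` joins the clauses.
function findTwoHop(edges) {
  const matches = [];
  for (const e1 of edges) {
    for (const e2 of edges) {
      // Join condition: e1's dst is e2's src (the shared variable `b`)
      if (e1.dst === e2.src) {
        matches.push({ a: e1.src, b: e1.dst, c: e2.dst });
      }
    }
  }
  return matches;
}

// chainGraph(4)-style edge list: v0 -> v1 -> v2 -> v3
const edges = [
  { src: 'v0', dst: 'v1' },
  { src: 'v1', dst: 'v2' },
  { src: 'v2', dst: 'v3' }
];
console.log(findTwoHop(edges).length) // 2 two-hop paths
```

The real engine uses worst-case optimal joins rather than this O(E²) nested loop, but the variable-binding semantics are the same.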
3. EmbeddingService (Vector Similarity & Text Search)
const { EmbeddingService } = require('rust-kgdb')
const service = new EmbeddingService()
// === Store Vector Embeddings (384 dimensions) ===
service.storeVector('entity1', new Array(384).fill(0.1))
service.storeVector('entity2', new Array(384).fill(0.15))
service.storeVector('entity3', new Array(384).fill(0.9))
// Retrieve stored vector
const vec = service.getVector('entity1')
console.log('Vector dimension:', vec.length) // 384
// Count stored vectors
console.log('Total vectors:', service.countVectors()) // 3
// === Similarity Search ===
// Find top 10 entities similar to 'entity1' with threshold 0.0
const similar = JSON.parse(service.findSimilar('entity1', 10, 0.0))
console.log('Similar entities:', similar)
// Returns entities sorted by cosine similarity
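For reference, the cosine similarity that ranks these results is just a normalized dot product. A standalone sketch (independent of the library):

```javascript
// Cosine similarity: dot(a, b) / (|a| * |b|); 1.0 means identical direction.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same way score 1.0 regardless of magnitude:
const v1 = new Array(384).fill(0.1);
const v2 = new Array(384).fill(0.15);
console.log(cosineSimilarity(v1, v2).toFixed(3)) // 1.000
```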
// === Multi-Provider Composite Embeddings ===
// Store embeddings from multiple providers (OpenAI, Voyage, Cohere)
service.storeComposite('product_123', JSON.stringify({
openai: new Array(384).fill(0.1),
voyage: new Array(384).fill(0.2),
cohere: new Array(384).fill(0.3)
}))
// Retrieve composite embedding
const composite = service.getComposite('product_123')
console.log('Composite embedding:', composite ? 'stored' : 'not found')
// Count composite embeddings
console.log('Total composites:', service.countComposites())
// === Composite Similarity Search (RRF Aggregation) ===
// Find similar using Reciprocal Rank Fusion across multiple providers
const compositeSimilar = JSON.parse(service.findSimilarComposite('product_123', 10, 0.5, 'rrf'))
console.log('Similar (composite RRF):', compositeSimilar)
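Reciprocal Rank Fusion combines per-provider rankings by summing 1/(k + rank) for each entity. A minimal sketch of the idea (k = 60 is the constant commonly used in the literature; the library's internal constant may differ):

```javascript
// RRF: each provider contributes 1 / (k + rank) for every entity it ranks.
// Entities ranked highly by several providers float to the top.
function reciprocalRankFusion(rankings, k = 60) {
  const scores = {};
  for (const ranking of rankings) {
    ranking.forEach((entity, index) => {
      scores[entity] = (scores[entity] || 0) + 1 / (k + index + 1);
    });
  }
  return Object.entries(scores).sort((a, b) => b[1] - a[1]).map(([e]) => e);
}

// Three hypothetical provider rankings for the same query:
const fused = reciprocalRankFusion([
  ['laptop', 'tablet', 'phone'],   // e.g. openai
  ['tablet', 'laptop', 'phone'],   // e.g. voyage
  ['laptop', 'phone', 'tablet']    // e.g. cohere
]);
console.log(fused[0]) // 'laptop' — top-ranked by two of three providers
```

Because RRF uses ranks rather than raw scores, it needs no score normalization across providers with different embedding spaces.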
// === Use Case: Semantic Product Search ===
// Store product embeddings
const products = ['laptop', 'phone', 'tablet', 'keyboard', 'mouse']
products.forEach((product, i) => {
// In production, use actual embeddings from OpenAI/Cohere/etc
const embedding = new Array(384).fill(0).map((_, j) => Math.sin(i * 0.1 + j * 0.01))
service.storeVector(product, embedding)
})
// Find similar products
const relatedToLaptop = JSON.parse(service.findSimilar('laptop', 5, 0.0))
console.log('Products similar to laptop:', relatedToLaptop)
3b. Embedding Triggers (Automatic Embedding Generation)
// Triggers automatically generate embeddings when data changes
// Configure triggers to fire on INSERT/UPDATE/DELETE events
// Example: Auto-embed new entities on insert
const triggerConfig = {
name: 'auto_embed_on_insert',
event: 'AfterInsert',
action: {
type: 'GenerateEmbedding',
source: 'Subject', // Embed the subject of the triple
provider: 'openai' // Use OpenAI provider
}
}
// Multiple triggers for different providers
const triggers = [
{ name: 'embed_openai', provider: 'openai' },
{ name: 'embed_voyage', provider: 'voyage' },
{ name: 'embed_cohere', provider: 'cohere' }
]
// Each trigger fires independently, creating composite embeddings
3c. Embedding Providers (Multi-Provider Architecture)
// rust-kgdb supports multiple embedding providers:
//
// Built-in Providers:
// - 'openai' → text-embedding-3-small (1536 or 384 dim)
// - 'voyage' → voyage-2, voyage-lite-02-instruct
// - 'cohere' → embed-v3
// - 'anthropic' → Via Voyage partnership
// - 'mistral' → mistral-embed
// - 'jina' → jina-embeddings-v2
// - 'ollama' → Local models (llama, mistral, etc.)
// - 'hf-tei' → HuggingFace Text Embedding Inference
//
// Provider Configuration (Rust-side):
const providerConfig = {
providers: {
openai: {
api_key: process.env.OPENAI_API_KEY,
model: 'text-embedding-3-small',
dimensions: 384
},
voyage: {
api_key: process.env.VOYAGE_API_KEY,
model: 'voyage-2',
dimensions: 1024
},
cohere: {
api_key: process.env.COHERE_API_KEY,
model: 'embed-english-v3.0',
dimensions: 384
},
ollama: {
base_url: 'http://localhost:11434',
model: 'nomic-embed-text',
dimensions: 768
}
},
default_provider: 'openai'
}
// Why Multi-Provider?
// Google Research (arxiv.org/abs/2508.21038) shows single embeddings hit
// a "recall ceiling" - different providers capture different semantic aspects:
// - OpenAI: General semantic understanding
// - Voyage: Domain-specific (legal, financial, code)
// - Cohere: Multilingual support
// - Ollama: Privacy-preserving local inference
// Aggregation Strategies for composite search:
// - 'rrf' → Reciprocal Rank Fusion (recommended)
// - 'max' → Maximum score across providers
// - 'avg' → Weighted average
// - 'voting' → Consensus (entity must appear in N providers)
4. DatalogProgram (Rule-Based Reasoning)
const { DatalogProgram, evaluateDatalog, queryDatalog } = require('rust-kgdb')
const program = new DatalogProgram()
// === Add Facts ===
program.addFact(JSON.stringify({predicate: 'parent', terms: ['alice', 'bob']}))
program.addFact(JSON.stringify({predicate: 'parent', terms: ['bob', 'charlie']}))
program.addFact(JSON.stringify({predicate: 'parent', terms: ['charlie', 'dave']}))
console.log('Facts:', program.factCount()) // 3
// === Add Rules ===
// Rule 1: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)
program.addRule(JSON.stringify({
head: {predicate: 'grandparent', terms: ['?X', '?Z']},
body: [
{predicate: 'parent', terms: ['?X', '?Y']},
{predicate: 'parent', terms: ['?Y', '?Z']}
]
}))
// Rule 2: ancestor(X, Y) :- parent(X, Y)
program.addRule(JSON.stringify({
head: {predicate: 'ancestor', terms: ['?X', '?Y']},
body: [
{predicate: 'parent', terms: ['?X', '?Y']}
]
}))
// Rule 3: ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z) (transitive closure)
program.addRule(JSON.stringify({
head: {predicate: 'ancestor', terms: ['?X', '?Z']},
body: [
{predicate: 'parent', terms: ['?X', '?Y']},
{predicate: 'ancestor', terms: ['?Y', '?Z']}
]
}))
console.log('Rules:', program.ruleCount()) // 3
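Conceptually, evaluation applies the rules repeatedly until no new facts can be derived (a fixpoint). A minimal pure-JavaScript illustration of the transitive ancestor rule (a naive sketch; the library uses semi-naive evaluation with delta propagation):

```javascript
// Naive fixpoint for: ancestor(X,Y) :- parent(X,Y).
//                     ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
function deriveAncestors(parents) {
  // Start with the base facts: every parent is an ancestor.
  const facts = new Set(parents.map(([x, y]) => `${x},${y}`));
  let changed = true;
  while (changed) {
    changed = false;
    for (const [x, y] of parents) {
      for (const fact of [...facts]) {          // snapshot, then extend
        const [a, b] = fact.split(',');
        if (a === y && !facts.has(`${x},${b}`)) {
          facts.add(`${x},${b}`);               // derived: ancestor(x, b)
          changed = true;
        }
      }
    }
  }
  return facts;
}

const ancestors = deriveAncestors([
  ['alice', 'bob'], ['bob', 'charlie'], ['charlie', 'dave']
]);
console.log(ancestors.size) // 6: three base facts plus three derived
```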
// === Evaluate Program ===
const result = evaluateDatalog(program)
console.log('Evaluation result:', result)
// === Query Derived Facts ===
const grandparents = JSON.parse(queryDatalog(program, 'grandparent'))
console.log('Grandparent relations:', grandparents)
// alice is grandparent of charlie
// bob is grandparent of dave
const ancestors = JSON.parse(queryDatalog(program, 'ancestor'))
console.log('Ancestor relations:', ancestors)
// alice->bob, alice->charlie, alice->dave
// bob->charlie, bob->dave
// charlie->dave
5. Pregel BSP Processing (Bulk Synchronous Parallel)
const {
chainGraph,
starGraph,
cycleGraph,
pregelShortestPaths
} = require('rust-kgdb')
// === Shortest Paths in Chain Graph ===
const chain = chainGraph(10) // v0 -> v1 -> v2 -> ... -> v9
// Run Pregel shortest paths from v0
const chainResult = JSON.parse(pregelShortestPaths(chain, 'v0', 20))
console.log('Chain shortest paths from v0:', chainResult)
// Expected: { v0: 0, v1: 1, v2: 2, v3: 3, ..., v9: 9 }
// === Shortest Paths in Star Graph ===
const star = starGraph(5) // hub connected to spoke0...spoke4
// Run Pregel from hub (center vertex)
const starResult = JSON.parse(pregelShortestPaths(star, 'hub', 10))
console.log('Star shortest paths from hub:', starResult)
// Expected: hub=0, all spokes=1
// === Shortest Paths in Cycle Graph ===
const cycle = cycleGraph(6) // v0 -> v1 -> v2 -> v3 -> v4 -> v5 -> v0
const cycleResult = JSON.parse(pregelShortestPaths(cycle, 'v0', 20))
console.log('Cycle shortest paths from v0:', cycleResult)
// In directed cycle: v0=0, v1=1, v2=2, v3=3, v4=4, v5=5
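The BSP model behind pregelShortestPaths runs in supersteps: vertices process incoming messages, update their values, and send messages along out-edges until no messages remain. A compact pure-JavaScript sketch of that loop (illustrative only, not the library's implementation):

```javascript
// Pregel-style single-source shortest paths over a unit-weight edge list.
function bspShortestPaths(edges, source, maxSupersteps = 20) {
  const dist = { [source]: 0 };
  let messages = { [source]: 0 }; // vertex -> candidate distance
  for (let step = 0; step < maxSupersteps && Object.keys(messages).length; step++) {
    const next = {};
    for (const [vertex, d] of Object.entries(messages)) {
      // Superstep: each vertex that received a message relaxes its out-edges.
      for (const e of edges.filter(e => e.src === vertex)) {
        const candidate = d + 1;
        if (!(e.dst in dist) || candidate < dist[e.dst]) {
          dist[e.dst] = candidate;
          next[e.dst] = candidate; // message delivered in the next superstep
        }
      }
    }
    messages = next; // barrier: all messages exchanged between supersteps
  }
  return dist;
}

const edges = [
  { src: 'v0', dst: 'v1' }, { src: 'v1', dst: 'v2' }, { src: 'v2', dst: 'v3' }
];
console.log(bspShortestPaths(edges, 'v0')) // { v0: 0, v1: 1, v2: 2, v3: 3 }
```

The barrier between supersteps is what makes BSP deterministic and parallelizable: within a superstep, vertices can be processed in any order or on any worker.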
// === Custom Graph for Pregel ===
const customGraph = new (require('rust-kgdb').GraphFrame)(
JSON.stringify([
{id: "server1"},
{id: "server2"},
{id: "server3"},
{id: "client"}
]),
JSON.stringify([
{src: "client", dst: "server1"},
{src: "client", dst: "server2"},
{src: "server1", dst: "server3"},
{src: "server2", dst: "server3"}
])
)
const networkResult = JSON.parse(pregelShortestPaths(customGraph, 'client', 10))
console.log('Network shortest paths from client:', networkResult)
// client=0, server1=1, server2=1, server3=2
6. Graph Factory Functions (All Types)
const {
friendsGraph,
chainGraph,
starGraph,
completeGraph,
cycleGraph,
binaryTreeGraph,
bipartiteGraph,
} = require('rust-kgdb')
// === friendsGraph() - Social Network ===
// Pre-built social network for testing
const friends = friendsGraph()
console.log('Friends graph:', friends.vertexCount(), 'people')
// === chainGraph(n) - Linear Path ===
// v0 -> v1 -> v2 -> ... -> v(n-1)
const chain5 = chainGraph(5)
console.log('Chain(5):', chain5.vertexCount(), 'vertices,', chain5.edgeCount(), 'edges')
// 5 vertices, 4 edges
// === starGraph(spokes) - Hub-Spoke ===
// hub -> spoke0, hub -> spoke1, ..., hub -> spoke(n-1)
const star6 = starGraph(6)
console.log('Star(6):', star6.vertexCount(), 'vertices,', star6.edgeCount(), 'edges')
// 7 vertices (1 hub + 6 spokes), 6 edges
// === completeGraph(n) - K_n Complete Graph ===
// Every vertex connected to every other vertex
const k4 = completeGraph(4)
console.log('K4:', k4.vertexCount(), 'vertices,', k4.edgeCount(), 'edges')
// 4 vertices, 6 edges (bidirectional = 12)
console.log('K4 triangles:', k4.triangleCount()) // 4 triangles
// === cycleGraph(n) - Circular ===
// v0 -> v1 -> v2 -> ... -> v(n-1) -> v0
const cycle5 = cycleGraph(5)
console.log('Cycle(5):', cycle5.vertexCount(), 'vertices,', cycle5.edgeCount(), 'edges')
// 5 vertices, 5 edges
// === binaryTreeGraph(depth) - Binary Tree ===
// Complete binary tree with given depth
const tree3 = binaryTreeGraph(3)
console.log('BinaryTree(3):', tree3.vertexCount(), 'vertices')
// 2^4 - 1 = 15 vertices for depth 3
// === bipartiteGraph(left, right) - Two Sets ===
// All left vertices connected to all right vertices
const bp34 = bipartiteGraph(3, 4)
console.log('Bipartite(3,4):', bp34.vertexCount(), 'vertices,', bp34.edgeCount(), 'edges')
// 7 vertices, 12 edges (3 * 4)
Core RDF/SPARQL Database
This npm package provides the high-performance in-memory database. For distributed cluster deployment (1B+ triples, horizontal scaling), contact: gonnect.uk@gmail.com
Deployment Modes
rust-kgdb supports three deployment modes:
| Mode | Use Case | Scalability | This Package |
|---|---|---|---|
| In-Memory | Development, embedded apps, testing | Single node, volatile | ✅ Included |
| Single Node (RocksDB/LMDB) | Production, persistence needed | Single node, persistent | Via Rust crate |
| Distributed Cluster | Enterprise, 1B+ triples | Horizontal scaling, 9+ partitions | Contact us |
Distributed Cluster Mode (Enterprise)
For enterprise deployments requiring 1B+ triples and horizontal scaling:
Key Features:
- Subject-Anchored Partitioning: All triples for a subject are guaranteed on the same partition for optimal locality
- Arrow-Powered OLAP: High-performance analytical queries executed as optimized SQL at scale
- Automatic Query Routing: The coordinator intelligently routes queries to the right executors
- Kubernetes-Native: StatefulSet-based executors with automatic failover
- Linear Horizontal Scaling: Add more executor pods to scale throughput
How It Works:
Your SPARQL queries work unchanged. For large-scale aggregations, the cluster automatically optimizes execution:
-- Your SPARQL query
SELECT (COUNT(*) AS ?count) (AVG(?salary) AS ?avgSalary)
WHERE {
?employee <http://ex/type> <http://ex/Employee> .
?employee <http://ex/salary> ?salary .
}
-- Cluster executes as optimized SQL internally
-- Results aggregated across all partitions automatically
Request a demo: gonnect.uk@gmail.com
Why rust-kgdb?
| Feature | rust-kgdb | Apache Jena | RDFox |
|---|---|---|---|
| Lookup Speed | 2.78 µs | ~50 µs | 50-100 µs |
| Memory/Triple | 24 bytes | 50-60 bytes | 32 bytes |
| SPARQL 1.1 | 100% | 100% | 95% |
| RDF 1.2 | 100% | Partial | No |
| WCOJ | ✅ LeapFrog | ❌ | ❌ |
| Mobile-Ready | ✅ iOS/Android | ❌ | ❌ |
Core Technical Innovations
1. Worst-Case Optimal Joins (WCOJ)
Traditional databases use nested-loop joins with O(n²) to O(n⁴) complexity. rust-kgdb implements the LeapFrog TrieJoin algorithm—a worst-case optimal join that achieves O(n log n) for multi-way joins.
How it works:
- Trie Data Structure: Triples indexed hierarchically (S→P→O) using BTreeMap for sorted access
- Variable Ordering: Frequency-based analysis orders variables for optimal intersection
- LeapFrog Iterator: Binary search across sorted iterators finds intersections without materializing intermediate results
Query: SELECT ?x ?y ?z WHERE { ?x :p ?y . ?y :q ?z . ?x :r ?z }
Nested Loop: O(n³) - examines every combination
WCOJ: O(n log n) - iterates in sorted order, seeks forward on mismatch
| Query Pattern | Before (Nested Loop) | After (WCOJ) | Speedup |
|---|---|---|---|
| 3-way star | O(n³) | O(n log n) | 50-100x |
| 4+ way complex | O(n⁴) | O(n log n) | 100-1000x |
| Chain queries | O(n²) | O(n log n) | 10-20x |
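The core primitive of LeapFrog TrieJoin is intersecting several sorted iterators by repeatedly seeking the lagging iterator forward instead of scanning. A simplified sketch over sorted arrays (this shows the seek idea only, not the full trie algorithm):

```javascript
// Leapfrog intersection of k sorted arrays.
// Each iterator "leaps" to the first value >= the current maximum,
// so matches are found without materializing intermediate results.
function leapfrogIntersect(arrays) {
  const pos = arrays.map(() => 0);
  const out = [];
  while (pos.every((p, i) => p < arrays[i].length)) {
    const values = pos.map((p, i) => arrays[i][p]);
    const max = Math.max(...values);
    if (values.every(v => v === max)) {
      out.push(max);                      // all iterators agree: emit a match
      pos.forEach((_, i) => pos[i]++);
    } else {
      // Seek each lagging iterator forward to >= max
      // (a binary search in the real implementation).
      values.forEach((v, i) => {
        while (pos[i] < arrays[i].length && arrays[i][pos[i]] < max) pos[i]++;
      });
    }
  }
  return out;
}

console.log(leapfrogIntersect([
  [1, 3, 5, 7, 9],
  [3, 4, 5, 9, 12],
  [2, 3, 9, 10]
])) // [3, 9]
```

In the database, each "array" is one sorted trie level per triple pattern sharing a variable, which is how multi-way joins avoid enumerating every combination.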
2. Sparse Matrix Engine (CSR Format)
Binary relations (e.g., foaf:knows, rdfs:subClassOf) are converted to Compressed Sparse Row (CSR) matrices for cache-efficient join evaluation:
- Memory: O(nnz) where nnz = number of edges (not O(n²))
- Matrix Multiplication: Replaces nested-loop joins
- Transitive Closure: Semi-naive Δ-matrix evaluation (not iterated powers)
// Traditional: O(n²) nested loops
for (s, p, o) in triples { ... }
// CSR Matrix: O(nnz) cache-friendly iteration
row_ptr[i] → col_indices[j] → values[j]
Used for: RDFS/OWL reasoning, transitive closure, Datalog evaluation.
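To make the CSR layout concrete, here is how a small edge relation maps to row_ptr/col_indices, with a neighbor scan (a toy 4-vertex sketch; the engine additionally stores matrix values and applies semi-naive Δ evaluation on top of this layout):

```javascript
// CSR for the edges 0->1, 0->2, 1->2, 2->3 over 4 vertices.
// rowPtr[i]..rowPtr[i+1] delimits vertex i's slice of colIndices.
const csr = {
  rowPtr: [0, 2, 3, 4, 4],   // vertex 3 has no out-edges
  colIndices: [1, 2, 2, 3]
};

// Scanning a vertex's neighbors is O(degree); total memory is O(nnz).
function neighbors(csr, v) {
  return csr.colIndices.slice(csr.rowPtr[v], csr.rowPtr[v + 1]);
}

console.log(neighbors(csr, 0)) // [1, 2]
console.log(neighbors(csr, 3)) // []
```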
3. SIMD + PGO Compiler Optimizations
Zero code changes—pure compiler-level performance gains.
| Optimization | Technology | Effect |
|---|---|---|
| SIMD Vectorization | AVX2/BMI2 (Intel), NEON (ARM) | 8-wide parallel operations |
| Profile-Guided Optimization | LLVM PGO | Hot path optimization, branch prediction |
| Link-Time Optimization | LTO (fat) | Cross-crate inlining, dead code elimination |
Benchmark Results (LUBM, Intel Skylake):
| Query | Before | After (SIMD+PGO) | Improvement |
|---|---|---|---|
| Q5: 2-hop chain | 230ms | 53ms | 77% faster |
| Q3: 3-way star | 177ms | 62ms | 65% faster |
| Q4: 3-hop chain | 254ms | 101ms | 60% faster |
| Q8: Triangle | 410ms | 193ms | 53% faster |
| Q7: Hierarchy | 343ms | 198ms | 42% faster |
| Q6: 6-way complex | 641ms | 464ms | 28% faster |
| Q2: 5-way star | 234ms | 183ms | 22% faster |
| Q1: 4-way star | 283ms | 258ms | 9% faster |
Average speedup: 44.5% across all queries.
4. Quad Indexing (SPOC)
Four complementary indexes enable O(1) pattern matching regardless of query shape:
| Index | Pattern | Use Case |
|---|---|---|
| SPOC | (?s, ?p, ?o, ?g) | Subject-centric queries |
| POCS | (?p, ?o, ?c, ?s) | Property enumeration |
| OCSP | (?o, ?c, ?s, ?p) | Object lookups (reverse links) |
| CSPO | (?c, ?s, ?p, ?o) | Named graph iteration |
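Index selection is mechanical: look at which positions of the pattern are bound and pick the index whose prefix covers them. A hedged sketch of that dispatch logic in plain JavaScript (an illustration of the principle; the actual planner lives in the Rust core and may differ):

```javascript
// Pick a quad index by which pattern positions are bound (non-null).
// Each index serves patterns whose bound terms form a prefix of its order.
function chooseIndex({ s, p, o, c }) {
  if (c != null) return 'CSPO'; // named-graph iteration
  if (s != null) return 'SPOC'; // subject-centric lookup
  if (p != null) return 'POCS'; // property enumeration
  if (o != null) return 'OCSP'; // reverse (object) lookup
  return 'SPOC';                // full scan: any index works
}

console.log(chooseIndex({ s: 'http://example.org/alice', p: null, o: null, c: null })) // SPOC
console.log(chooseIndex({ s: null, p: null, o: 'Alice', c: null })) // OCSP
```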
Storage Backends
rust-kgdb uses a pluggable storage architecture. Default is in-memory (zero configuration). For persistence, enable RocksDB.
| Backend | Feature Flag | Use Case | Status |
|---|---|---|---|
| InMemory | default | Development, testing, embedded | ✅ Production Ready |
| RocksDB | rocksdb-backend | Production, large datasets | ✅ 61 tests passing |
| LMDB | lmdb-backend | Read-heavy workloads | ✅ 31 tests passing |
InMemory (Default)
Zero configuration, maximum performance. Data is volatile (lost on process exit).
High-Performance Data Structures:
| Component | Structure | Why |
|---|---|---|
| Triple Store | DashMap | Lock-free concurrent hash map, 100K pre-allocation |
| WCOJ Trie | BTreeMap | Sorted iteration for LeapFrog intersection |
| Dictionary | FxHashSet | String interning with rustc-optimized hashing |
| Hypergraph | FxHashMap | Fast node→edge adjacency lists |
| Reasoning | AHashMap | RDFS/OWL inference with DoS-resistant hashing |
| Datalog | FxHashMap | Semi-naive evaluation with delta propagation |
Why these structures enable sub-microsecond performance:
- DashMap: Sharded locks (16 shards default) → near-linear scaling on multi-core
- FxHashMap: Rust compiler's hash function → 30% faster than std HashMap
- BTreeMap: O(log n) ordered iteration → enables binary search in LeapFrog
- Pre-allocation: 100K capacity avoids rehashing during bulk inserts
use storage::{QuadStore, InMemoryBackend};
let store = QuadStore::new(InMemoryBackend::new());
// Ultra-fast: 2.78 µs lookups, zero disk I/O
RocksDB (Persistent)
LSM-tree based storage with ACID transactions. Tested with 61 comprehensive tests.
# Cargo.toml - Enable RocksDB backend
[dependencies]
storage = { version = "0.1.10", features = ["rocksdb-backend"] }
use storage::{QuadStore, RocksDbBackend};
// Create persistent database
let backend = RocksDbBackend::new("/path/to/data")?;
let store = QuadStore::new(backend);
// Features:
// - ACID transactions
// - Snappy compression (automatic)
// - Crash recovery
// - Range & prefix scanning
// - 1MB+ value support
// Force sync to disk
store.flush()?;
RocksDB Test Coverage:
- Basic CRUD operations (14 tests)
- Range scanning (8 tests)
- Prefix scanning (6 tests)
- Batch operations (8 tests)
- Transactions (8 tests)
- Concurrent access (5 tests)
- Unicode & binary data (4 tests)
- Large key/value handling (8 tests)
LMDB (Memory-Mapped Persistent)
B+tree based storage with memory-mapped I/O (via heed crate). Optimized for read-heavy workloads with MVCC (Multi-Version Concurrency Control). Tested with 31 comprehensive tests.
# Cargo.toml - Enable LMDB backend
[dependencies]
storage = { version = "0.1.12", features = ["lmdb-backend"] }
use storage::{QuadStore, LmdbBackend};
// Create persistent database (default 10GB map size)
let backend = LmdbBackend::new("/path/to/data")?;
let store = QuadStore::new(backend);
// Or with custom map size (1GB)
let backend = LmdbBackend::with_map_size("/path/to/data", 1024 * 1024 * 1024)?;
// Features:
// - Memory-mapped I/O (zero-copy reads)
// - MVCC for concurrent readers
// - Crash-safe ACID transactions
// - Range & prefix scanning
// - Excellent for read-heavy workloads
// Sync to disk
store.flush()?;
When to use LMDB vs RocksDB:
| Characteristic | LMDB | RocksDB |
|---|---|---|
| Read Performance | ✅ Faster (memory-mapped) | Good |
| Write Performance | Good | ✅ Faster (LSM-tree) |
| Concurrent Readers | ✅ Unlimited | Limited by locks |
| Write Amplification | Low | Higher (compaction) |
| Memory Usage | Higher (map size) | Lower (cache-based) |
| Best For | Read-heavy, OLAP | Write-heavy, OLTP |
LMDB Test Coverage:
- Basic CRUD operations (8 tests)
- Range scanning (4 tests)
- Prefix scanning (3 tests)
- Batch operations (3 tests)
- Large key/value handling (4 tests)
- Concurrent access (4 tests)
- Statistics & flush (3 tests)
- Edge cases (2 tests)
TypeScript SDK
The npm package uses the in-memory backend—ideal for:
- Knowledge graph queries
- SPARQL execution
- Data transformation pipelines
- Embedded applications
import { GraphDB } from 'rust-kgdb'
// In-memory database (default, no configuration needed)
const db = new GraphDB('http://example.org/app')
// For persistence, export via CONSTRUCT:
const ntriples = db.queryConstruct('CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }')
fs.writeFileSync('backup.nt', ntriples)
Installation
npm install rust-kgdb
Platform Support (v0.2.1)
| Platform | Architecture | Status | Notes |
|---|---|---|---|
| macOS | Intel (x64) | ✅ Works out of the box | Pre-built binary included |
| macOS | Apple Silicon (arm64) | ⏳ v0.2.2 | Coming soon |
| Linux | x64 | ⏳ v0.2.2 | Coming soon |
| Linux | arm64 | ⏳ v0.2.2 | Coming soon |
| Windows | x64 | ⏳ v0.2.2 | Coming soon |
This release (v0.2.1) includes a pre-built binary for macOS x64 only. Other platforms will be added in the next release.
Quick Start
Complete Working Example
import { GraphDB } from 'rust-kgdb'
// 1. Create database
const db = new GraphDB('http://example.org/myapp')
// 2. Load data (Turtle format)
db.loadTtl(`
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex: <http://example.org/> .
ex:alice a foaf:Person ;
foaf:name "Alice" ;
foaf:age 30 ;
foaf:knows ex:bob, ex:charlie .
ex:bob a foaf:Person ;
foaf:name "Bob" ;
foaf:age 25 ;
foaf:knows ex:charlie .
ex:charlie a foaf:Person ;
foaf:name "Charlie" ;
foaf:age 35 .
`, null)
// 3. Query: Find friends-of-friends (WCOJ optimized!)
const fof = db.querySelect(`
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX ex: <http://example.org/>
SELECT ?person ?friend ?fof WHERE {
?person foaf:knows ?friend .
?friend foaf:knows ?fof .
FILTER(?person != ?fof)
}
`)
console.log('Friends of Friends:', fof)
// [{ person: 'ex:alice', friend: 'ex:bob', fof: 'ex:charlie' }]
// 4. Aggregation: Average age
const stats = db.querySelect(`
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT (COUNT(?p) AS ?count) (AVG(?age) AS ?avgAge) WHERE {
?p a foaf:Person ; foaf:age ?age .
}
`)
console.log('Stats:', stats)
// [{ count: '3', avgAge: '30.0' }]
// 5. ASK query
const hasAlice = db.queryAsk(`
PREFIX ex: <http://example.org/>
ASK { ex:alice a <http://xmlns.com/foaf/0.1/Person> }
`)
console.log('Has Alice?', hasAlice) // true
// 6. CONSTRUCT query
const graph = db.queryConstruct(`
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX ex: <http://example.org/>
CONSTRUCT { ?p foaf:knows ?f }
WHERE { ?p foaf:knows ?f }
`)
console.log('Extracted graph:', graph)
// 7. Count and cleanup
console.log('Triple count:', db.count()) // 12
db.clear()
Save to File
import { writeFileSync } from 'fs'
// Save as N-Triples
const db = new GraphDB('http://example.org/export')
db.loadTtl(`<http://example.org/s> <http://example.org/p> "value" .`, null)
const ntriples = db.queryConstruct(`CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }`)
writeFileSync('output.nt', ntriples)
SPARQL 1.1 Features (100% W3C Compliant)
Query Forms
// SELECT - return bindings
db.querySelect('SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10')
// ASK - boolean existence check
db.queryAsk('ASK { <http://example.org/x> ?p ?o }')
// CONSTRUCT - build new graph
db.queryConstruct('CONSTRUCT { ?s <http://new/prop> ?o } WHERE { ?s ?p ?o }')
Aggregates
db.querySelect(`
SELECT ?type (COUNT(*) AS ?count) (AVG(?value) AS ?avg)
WHERE { ?s a ?type ; <http://ex/value> ?value }
GROUP BY ?type
HAVING (COUNT(*) > 5)
ORDER BY DESC(?count)
`)
Property Paths
// Transitive closure (rdfs:subClassOf*)
db.querySelect('SELECT ?class WHERE { ?class rdfs:subClassOf* <http://top/Class> }')
// Alternative paths
db.querySelect('SELECT ?name WHERE { ?x (foaf:name|rdfs:label) ?name }')
// Sequence paths
db.querySelect('SELECT ?grandparent WHERE { ?x foaf:parent/foaf:parent ?grandparent }')
Named Graphs
// Load into named graph
db.loadTtl('<http://s> <http://p> "o" .', 'http://example.org/graph1')
// Query specific graph
db.querySelect(`
SELECT ?s ?p ?o WHERE {
GRAPH <http://example.org/graph1> { ?s ?p ?o }
}
`)
UPDATE Operations
// INSERT DATA - Add new triples
db.updateInsert(`
PREFIX ex: <http://example.org/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
INSERT DATA {
ex:david a foaf:Person ;
foaf:name "David" ;
foaf:age 28 ;
foaf:email "david@example.org" .
ex:project1 ex:hasLead ex:david ;
ex:budget 50000 ;
ex:status "active" .
}
`)
// Verify insert
const count = db.count()
console.log(`Total triples after insert: ${count}`)
// DELETE WHERE - Remove matching triples
db.updateDelete(`
PREFIX ex: <http://example.org/>
DELETE WHERE { ?s ex:status "completed" }
`)
Bulk Data Loading Example
import { GraphDB } from 'rust-kgdb'
import { readFileSync } from 'fs'
const db = new GraphDB('http://example.org/bulk-load')
// Load Turtle file
const ttlData = readFileSync('data/knowledge-graph.ttl', 'utf-8')
db.loadTtl(ttlData, null) // null = default graph
// Load into named graph
const orgData = readFileSync('data/organization.ttl', 'utf-8')
db.loadTtl(orgData, 'http://example.org/graphs/org')
// Load N-Triples format
const ntData = readFileSync('data/triples.nt', 'utf-8')
db.loadNTriples(ntData, null)
console.log(`Loaded ${db.count()} triples`)
// Query across all graphs
const results = db.querySelect(`
SELECT ?g (COUNT(*) AS ?count) WHERE {
GRAPH ?g { ?s ?p ?o }
}
GROUP BY ?g
`)
console.log('Triples per graph:', results)
Sample Application
Knowledge Graph Demo
A complete, production-ready sample application demonstrating enterprise knowledge graph capabilities is available in the repository.
Location: examples/knowledge-graph-demo/
Features Demonstrated:
- Complete organizational knowledge graph (employees, departments, projects, skills)
- SPARQL SELECT queries with star and chain patterns (WCOJ-optimized)
- Aggregations (COUNT, AVG, GROUP BY, HAVING)
- Property paths for transitive closure (organizational hierarchy)
- SPARQL ASK and CONSTRUCT queries
- Named graphs for multi-tenant data isolation
- Data export to Turtle format
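The multi-tenant isolation listed above relies on scoping every query to a tenant's named graph. A minimal sketch of that pattern (the graph-URI scheme and helper name here are assumptions for illustration, not part of the package):

```javascript
// Build a SPARQL query restricted to one tenant's named graph.
// The http://example.org/graphs/<tenant> URI scheme is illustrative.
function tenantQuery(tenantId, pattern) {
  const graph = `http://example.org/graphs/${encodeURIComponent(tenantId)}`
  return `SELECT * WHERE { GRAPH <${graph}> { ${pattern} } }`
}

const q = tenantQuery('acme', '?s ?p ?o')
// Pass q to db.querySelect(q); only triples in acme's graph are matched.
```

Because the data for each tenant lives in its own named graph, the default graph never leaks across tenants as long as every query goes through a scoping helper like this.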
Run the Demo:
cd examples/knowledge-graph-demo
npm install
npm start
Sample Output:
The demo creates a realistic knowledge graph with:
- 5 employees across 4 departments
- 13 technical and soft skills
- 2 software projects
- Reporting hierarchies and salary data
- Named graph for sensitive compensation data
Example Query from Demo (finds all direct and indirect reports):
const pathQuery = `
PREFIX ex: <http://example.org/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?employee ?name WHERE {
?employee ex:reportsTo+ ex:alice . # Transitive closure
?employee foaf:name ?name .
}
ORDER BY ?name
`
const results = db.querySelect(pathQuery)
Learn More: See the demo README for full documentation, query examples, and how to customize the knowledge graph.
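Conceptually, the `ex:reportsTo+` property path computes a transitive closure over the reporting edges. The database evaluates this natively, but the semantics can be illustrated in plain JavaScript (the employee names below are illustrative, not the demo's exact dataset):

```javascript
// Edges: employee -> manager, mirroring ex:reportsTo in the demo data.
const reportsTo = {
  bob: 'alice',
  carol: 'alice',
  dave: 'carol',
}

// All employees reaching `manager` via one or more reportsTo hops,
// i.e. the answer set of `?employee ex:reportsTo+ <manager>`.
function allReports(manager) {
  const result = []
  for (const emp of Object.keys(reportsTo)) {
    let cur = emp
    const seen = new Set()
    while (cur !== undefined && !seen.has(cur)) {
      seen.add(cur)
      cur = reportsTo[cur]
      if (cur === manager) { result.push(emp); break }
    }
  }
  return result.sort()
}
```

With these edges, `allReports('alice')` includes `dave` even though he reports to alice only indirectly through carol, which is exactly what the `+` operator adds over a single-hop pattern.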
API Reference
GraphDB Class
class GraphDB {
constructor(baseUri: string) // Create with base URI
static inMemory(): GraphDB // Create anonymous in-memory DB
// Data Loading
loadTtl(data: string, graph: string | null): void
loadNTriples(data: string, graph: string | null): void
// SPARQL Queries (WCOJ-optimized)
querySelect(sparql: string): Array<Record<string, string>>
queryAsk(sparql: string): boolean
queryConstruct(sparql: string): string // Returns N-Triples
// SPARQL Updates
updateInsert(sparql: string): void
updateDelete(sparql: string): void
// Database Operations
count(): number
clear(): void
getVersion(): string
}
Node Class
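Each constructor on the Node class below corresponds to a standard RDF 1.1 term serialization. A plain-JavaScript sketch of that mapping (these helpers are illustrations of the N-Triples syntax, not functions exported by rust-kgdb):

```javascript
// Illustrative formatters for the N-Triples form of each term kind.
const iri = (uri) => `<${uri}>`
const literal = (v) => `"${v}"`
const langLiteral = (v, lang) => `"${v}"@${lang}`
const typedLiteral = (v, dt) => `"${v}"^^<${dt}>`

// e.g. Node.typedLiteral('28', 'http://www.w3.org/2001/XMLSchema#integer')
// denotes the term: "28"^^<http://www.w3.org/2001/XMLSchema#integer>
const age = typedLiteral('28', 'http://www.w3.org/2001/XMLSchema#integer')
```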
class Node {
static iri(uri: string): Node
static literal(value: string): Node
static langLiteral(value: string, lang: string): Node
static typedLiteral(value: string, datatype: string): Node
static integer(value: number): Node
static boolean(value: boolean): Node
static blank(id: string): Node
}
Performance Characteristics
Complexity Analysis
| Operation | Complexity | Notes |
|---|---|---|
| Triple lookup | O(1) | Hash-based SPOC index |
| Pattern scan | O(k) | k = matching triples |
| Star join (WCOJ) | O(n log n) | LeapFrog intersection |
| Complex join (WCOJ) | O(n log n) | Trie-based |
| Transitive closure | O(n²) worst | CSR matrix optimization |
| Bulk insert | O(n) | Batch indexing |
Memory Layout
Triple: 24 bytes
├── Subject: 8 bytes (dictionary ID)
├── Predicate: 8 bytes (dictionary ID)
└── Object: 8 bytes (dictionary ID)
String Interning: All URIs/literals stored once in Dictionary
Index Overhead: ~4x base triple size (4 indexes)
Total: ~120 bytes/triple including indexes
Performance Benchmarks
By Deployment Mode
| Mode | Lookup | Insert | Memory | Dataset Size |
|---|---|---|---|---|
| In-Memory (npm) | 2.78 µs | 146K/sec | 24 bytes/triple | <10M triples |
| Single Node (RocksDB) | 5-10 µs | 100K/sec | On-disk | <100M triples |
| Distributed Cluster | 10-50 µs | 500K+/sec* | Distributed | 1B+ triples |
*Aggregate throughput across all executors with HDRF partitioning
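Using the ~120 bytes/triple figure from the Memory Layout section (24-byte triple plus ~4x index overhead), a rough capacity estimate for the in-memory mode can be computed:

```javascript
// Rough in-memory footprint estimate, based on the ~120 bytes/triple
// figure above (dictionary-encoded triple plus 4 indexes).
const BYTES_PER_TRIPLE = 120

function estimateMemoryMB(tripleCount) {
  return (tripleCount * BYTES_PER_TRIPLE) / (1024 * 1024)
}

// At the suggested 10M-triple in-memory ceiling this is roughly 1.1 GB.
const mb = estimateMemoryMB(10_000_000)
```

This back-of-the-envelope number is why the table above suggests moving to the RocksDB or distributed modes beyond ~10M triples.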
SIMD + PGO Query Performance (LUBM Benchmark)
| Query | Pattern | Time | Improvement |
|---|---|---|---|
| Q5 | 2-hop chain | 53ms | 77% faster |
| Q3 | 3-way star | 62ms | 65% faster |
| Q4 | 3-hop chain | 101ms | 60% faster |
| Q8 | Triangle | 193ms | 53% faster |
| Q7 | Hierarchy | 198ms | 42% faster |
Average: 44.5% speedup with zero code changes (compiler optimizations only).
Version History
v0.2.2 (2025-12-08) - Enhanced Documentation
- Added comprehensive INSERT DATA examples with PREFIX syntax
- Added bulk data loading example with named graphs
- Enhanced SPARQL UPDATE section with real-world patterns
- Improved documentation for data import workflows
v0.2.1 (2025-12-08) - npm Platform Fix
- Fixed native module loading for platform-specific binaries
- This release includes a pre-built binary for macOS x64 only
- Other platforms coming in next release
v0.2.0 (2025-12-08) - Distributed Cluster Support
- NEW: Distributed cluster architecture with HDRF partitioning
- Subject-Hash Filter for accurate COUNT deduplication across replicas
- Arrow-powered OLAP query path for high-performance analytical queries
- Coordinator-Executor pattern with gRPC communication
- 9-partition default for optimal data distribution
- Contact for cluster deployment: gonnect.uk@gmail.com
- Coming soon: Embedding support for semantic search (v0.3.0)
v0.1.12 (2025-12-01) - LMDB Backend Release
- LMDB storage backend fully implemented (31 tests passing)
- Memory-mapped I/O for optimal read performance
- MVCC concurrency for unlimited concurrent readers
- Complete LMDB vs RocksDB comparison documentation
- Sample application with 87 triples demonstrating all features
v0.1.9 (2025-12-01) - SIMD + PGO Release
- 44.5% average speedup via SIMD + PGO compiler optimizations
- WCOJ execution with LeapFrog TrieJoin
- Release automation infrastructure
- All packages updated to gonnect-uk namespace
v0.1.8 (2025-12-01) - WCOJ Execution
- WCOJ execution path activated
- Variable ordering analysis for optimal joins
- 577 tests passing
v0.1.7 (2025-11-30)
- Query optimizer with automatic strategy selection
- WCOJ algorithm integration (planning phase)
v0.1.3 (2025-11-18)
- Initial TypeScript SDK
- 100% W3C SPARQL 1.1 compliance
- 100% W3C RDF 1.2 compliance
Use Cases
| Domain | Application |
|---|---|
| Knowledge Graphs | Enterprise ontologies, taxonomies |
| Semantic Search | Structured queries over unstructured data |
| Data Integration | ETL with SPARQL CONSTRUCT |
| Compliance | SHACL validation, provenance tracking |
| Graph Analytics | Pattern detection, community analysis |
| Mobile Apps | Embedded RDF on iOS/Android |
License
Apache License 2.0
Built with Rust + NAPI-RS