
dexie-batch

Fetch IndexedDB entries in batches to improve performance while avoiding errors like Maximum IPC message size exceeded.

Installation

If you are using a module bundler:

npm i dexie-batch

Alternatively, you can use one of the pre-built scripts and include it after the script for Dexie:

<script src="https://unpkg.com/dexie-batch/dist/dexie-batch.min.js"></script>

This way, DexieBatch will be available as a global variable.

Usage

import DexieBatch from 'dexie-batch'
import table from './my-awesome-dexie-table'

const collection = table.toCollection()

// Will fetch 99 items in batches of size 25 when used
const batchDriver = new DexieBatch({ batchSize: 25, limit: 99 })

// You can check if an instance will fetch batches concurrently
if (batchDriver.isParallel()) { // true in this case
  console.log('Fetching batches concurrently!')
}

batchDriver.each(collection, (entry, idx) => {
  // Process each item individually
}).then(n => console.log(`Fetched ${n} batches`))

batchDriver.eachBatch(collection, (batch, batchIdx) => {
  // Process each batch (array of entries) individually
}).then(n => console.log(`Fetched ${n} batches`))

The returned Dexie.Promise resolves when all batch operations have finished. If the user callback returns a Promise, it is awaited before the driver's promise resolves.
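The "callback promises are awaited" behavior can be sketched in plain JavaScript (a simplified illustration, not dexie-batch's actual implementation; `eachEntry` and its arguments are hypothetical stand-ins):

```javascript
// Minimal sketch of "wait on callback results before resolving".
// Promise.all awaits any Promises the callback returns and passes
// plain (non-Promise) return values through unchanged.
function eachEntry(entries, callback) {
  return Promise.all(entries.map((entry, idx) => callback(entry, idx)))
    .then(() => entries.length)
}

const processed = []
eachEntry(['a', 'b', 'c'], entry =>
  // Simulate async per-entry work, e.g. writing to another store
  new Promise(resolve => setTimeout(() => {
    processed.push(entry)
    resolve()
  }, 10))
).then(n => console.log(`Processed ${n} entries`))
```

By the time the outer promise resolves, every per-entry promise has settled, so `processed` is guaranteed to be complete.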

The batchSize option is mandatory since a sensible value depends strongly on the individual record size.

Batches are requested in parallel if and only if the limit option is present; otherwise, the driver would not know when to stop issuing requests. When no limit is given, batches are requested serially until a request yields an empty result.
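The two strategies can be sketched as follows (a simplified illustration of the idea, not dexie-batch's actual code; `fetchRange` is a hypothetical stand-in for a Dexie collection query):

```javascript
// Stand-in data source: 99 records, fetched by offset and count
const data = Array.from({ length: 99 }, (_, i) => i)
const fetchRange = (offset, count) =>
  Promise.resolve(data.slice(offset, offset + count))

// Serial: the total is unknown, so request batches one after
// another until a request comes back empty.
async function eachBatchSerial(batchSize, onBatch) {
  let offset = 0
  let batches = 0
  while (true) {
    const batch = await fetchRange(offset, batchSize)
    if (batch.length === 0) break
    await onBatch(batch, batches++)
    offset += batchSize
  }
  return batches
}

// Parallel: the limit tells us up front how many batches exist,
// so all requests can be issued concurrently.
async function eachBatchParallel(batchSize, limit, onBatch) {
  const batchCount = Math.ceil(limit / batchSize)
  await Promise.all(
    Array.from({ length: batchCount }, async (_, i) => {
      const batch = await fetchRange(i * batchSize, batchSize)
      if (batch.length > 0) await onBatch(batch, i)
    })
  )
  return batchCount
}
```

Note the trade-off this sketch makes visible: the serial variant always pays for one extra (empty) request to detect the end, while the parallel variant needs the limit to compute the batch count before any data arrives.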