JSPM

@unidev-hub/cache

1.0.0
  • License MIT

A unified caching layer with support for in-memory caching, Redis, tiered strategies, and distributed locks

Package Exports

  • @unidev-hub/cache
  • @unidev-hub/cache/dist/index.js

This package does not declare an "exports" field, so the exports above were automatically detected and optimized by JSPM instead. If a package subpath is missing, consider filing an issue against the original package (@unidev-hub/cache) asking for "exports" field support. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

@unidev-hub/cache

A unified caching layer for Node.js applications that provides a consistent interface to different caching backends with advanced features.

Features

  • 🚀 Multiple Cache Providers

    • In-memory caching for single-instance applications
    • Redis-based caching for distributed applications
    • Tiered caching for optimal performance (e.g., memory L1 + Redis L2)
  • 💪 Advanced Caching Strategies

    • TTL-based expiration
    • LRU/LFU eviction policies
    • Write-through, write-behind, and write-around strategies
  • 🔄 Cache Invalidation

    • Pattern-based invalidation (supports wildcards)
    • Tag-based invalidation for related items
    • Version-based invalidation (for bulk invalidation)
  • 🔒 Distributed Locks

    • Memory-based locks for single-instance applications
    • Redis-based locks for distributed coordination
    • Auto-extending locks to prevent expiration during long operations
  • ⚙️ Flexible Configuration

    • Simple API with sensible defaults
    • Highly customizable caching behavior
    • Built-in logging and monitoring

Installation

npm install @unidev-hub/cache

or

yarn add @unidev-hub/cache

Quick Start

import { CacheManager } from '@unidev-hub/cache';

// Create an in-memory cache
const cache = new CacheManager();

// Basic operations
await cache.set('key', { hello: 'world' });
const value = await cache.get('key');
console.log(value); // { hello: 'world' }

// With TTL (time-to-live)
await cache.set('temporary', 'expires soon', 5000); // 5 seconds

// Cache or compute pattern
const expensiveValue = await cache.getOrSet(
  'expensive-operation',
  async () => {
    // This function only runs if the key isn't in the cache
    console.log('Computing expensive value...');
    return await someExpensiveOperation();
  },
  3600000 // 1 hour TTL
);

Cache Providers

In-Memory Cache

The default provider is an in-memory cache, suitable for single-instance applications or testing.

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager({
  memory: {
    maxItems: 10000,   // Maximum number of items to store
    maxSize: 100000000, // Maximum cache size in bytes (100MB)
    evictionStrategy: 'lru', // 'lru', 'lfu', or 'fifo'
    checkExpirationInterval: 60000, // Check for expired items every 1 minute
  }
});

Redis Cache

For distributed applications, use the Redis cache provider:

import { CacheManager } from '@unidev-hub/cache';
import Redis from 'ioredis';

const redis = new Redis({
  host: 'localhost',
  port: 6379,
  // other Redis options
});

const cache = new CacheManager({
  redis: {
    client: redis,    // Existing Redis client
    // Or connection options:
    // connection: {
    //   host: 'localhost',
    //   port: 6379,
    //   password: 'secret',
    //   db: 0
    // },
    ttl: 3600000,     // Default TTL (1 hour)
    keyPrefix: 'app:cache:',  // Prefix for all keys
  }
});

Tiered Cache

Combine multiple cache providers in a tiered architecture for optimal performance. In the conventional sense of these terms, write-through writes to every tier synchronously, write-behind writes to the top tier and flushes to lower tiers asynchronously, and write-around bypasses the upper tiers on writes so they are populated only on reads:

import { CacheManager, MemoryCache, RedisCache } from '@unidev-hub/cache';
import Redis from 'ioredis';

const redis = new Redis();

const cache = new CacheManager({
  tiered: {
    providers: [
      {
        provider: new MemoryCache({ maxItems: 10000 }),
        options: {
          writeOnHitFromLowerTier: true,
          ttl: 300000, // 5 minutes for L1 (memory)
        }
      },
      {
        provider: new RedisCache({ client: redis }),
        options: {
          ttl: 3600000, // 1 hour for L2 (Redis)
        }
      }
    ],
    writeStrategy: 'write-through', // 'write-through', 'write-behind', or 'write-around'
    propagateDeletes: true,   // Delete from all tiers
    propagateUpdates: true,   // Update all tiers
  }
});

Cache Invalidation

Pattern-based Invalidation

Invalidate cache entries using wildcard patterns:

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager({
  invalidation: {
    method: 'pattern',
    maxKeys: 1000, // Max keys to invalidate in one operation
  }
});

// Set some values
await cache.set('user:1:profile', { name: 'Alice' });
await cache.set('user:1:settings', { theme: 'dark' });
await cache.set('user:2:profile', { name: 'Bob' });

// Invalidate all keys for user 1
const invalidatedCount = await cache.invalidate('user:1:*');
console.log(`Invalidated ${invalidatedCount} entries`); // 2

Tag-based Invalidation

Associate cache entries with tags and invalidate by tag:

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager({
  invalidation: {
    method: 'tag',
  }
});

// Set values with tags
await cache.set('user:1', { name: 'Alice' }, 3600000, {
  method: 'tag',
  tags: ['users', 'user-1']
});

await cache.set('user:2', { name: 'Bob' }, 3600000, {
  method: 'tag',
  tags: ['users', 'user-2']
});

// Invalidate by a single tag
await cache.invalidate(undefined, ['users']); // All users

// Invalidate by multiple tags (AND operation)
await cache.invalidate(undefined, ['users', 'vip']); // Users who are VIPs

Version-based Invalidation

Namespace your cache keys with versions for bulk invalidation:

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager({
  invalidation: {
    method: 'version',
  }
});

// Set values in a namespace
await cache.set('product:1', { name: 'Laptop' }, 3600000, {
  method: 'version',
  namespace: 'products'
});

await cache.set('product:2', { name: 'Phone' }, 3600000, {
  method: 'version',
  namespace: 'products'
});

// Invalidate an entire namespace by incrementing its version
await cache.invalidate(undefined, undefined, 'products');

Distributed Locks

Basic Lock Usage

Acquire locks to ensure exclusive access to resources:

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager({
  redis: {
    // Redis configuration
  }
});

// Get the lock manager
const lockManager = cache.getLockManager();

// Acquire a lock
const lock = await lockManager.acquire(
  'resource-name',
  10000, // TTL: 10 seconds
  {
    wait: true, // Wait if lock is held
    retryDelay: 200, // 200ms between retries
    maxRetries: 50,  // Max retry attempts
    autoExtend: true, // Auto-extend the lock while held
  }
);

if (lock) {
  try {
    // Perform operations with exclusive access
    await someOperation();
  } finally {
    // Always release the lock when done
    await lockManager.release(lock);
  }
} else {
  console.log('Failed to acquire lock');
}

Using withLock Helper

Simplify locking with the withLock helper:

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager();
const lockManager = cache.getLockManager();

// Execute function with automatic lock management
const result = await lockManager.withLock(
  'resource-name',
  async (lock) => {
    // This code runs with exclusive access
    return await someOperation();
  },
  10000, // TTL: 10 seconds
  {
    wait: true,
    maxRetries: 50,
  }
);

Caching with Locking

Use wrapWithLock to combine caching and locking:

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager();

// Create a function that caches results and ensures exclusive access
const getCachedUserWithLock = cache.wrapWithLock(
  async (userId) => {
    // This expensive operation runs with exclusive access
    // and only when the result isn't cached
    return await fetchUserFromDatabase(userId);
  },
  (userId) => `user:${userId}`, // Key generator
  { ttl: 5000, wait: true }, // Lock options
  3600000, // Cache TTL: 1 hour
  { method: 'tag', tags: ['users'] } // Invalidation options
);

// Usage
const user = await getCachedUserWithLock(123);

Advanced Usage

Function Wrapping

Automatically cache function results:

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager();

// Wrap a function with caching
const getCachedUser = cache.wrap(
  async (userId) => {
    console.log('Fetching user from database...');
    return await fetchUserFromDatabase(userId);
  },
  (userId) => `user:${userId}`, // Key generator function
  3600000 // 1 hour TTL
);

// First call executes the function
const user1 = await getCachedUser(123);

// Second call retrieves from cache
const user2 = await getCachedUser(123);

Custom Key Generation

Generate consistent cache keys for complex objects:

import { CacheManager } from '@unidev-hub/cache';

// Generate a cache key from an object
const key = CacheManager.generateKey({
  userId: 123,
  filters: { status: 'active', role: 'admin' },
  sort: { field: 'created', order: 'desc' }
}, {
  prefix: 'users:list:',
  includeType: true,
  hash: true, // Hash the key for length consistency
});

// Use the generated key
const cache = new CacheManager();
await cache.set(key, resultData);

Batch Operations

Perform operations on multiple keys at once:

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager();

// Set multiple values
const values = new Map();
values.set('key1', 'value1');
values.set('key2', 'value2');
values.set('key3', 'value3');
await cache.setMany(values, 3600000);

// Get multiple values
const keys = ['key1', 'key2', 'key4'];
const results = await cache.getMany(keys);
console.log(results.get('key1')); // 'value1'
console.log(results.has('key4')); // false (not found)

// Delete multiple values
await cache.deleteMany(['key1', 'key2']);

Monitoring and Statistics

Get cache statistics for monitoring:

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager();

// Get cache statistics
const stats = await cache.getStats();
console.log(`Cache size: ${stats.size} items`);
console.log(`Hit ratio: ${stats.hitRatio * 100}%`);
console.log(`Hits: ${stats.hits}, Misses: ${stats.misses}`);

Logging

Enable logging to debug cache operations:

import { CacheManager } from '@unidev-hub/cache';

const cache = new CacheManager({
  enableLogging: true,
  logLevel: 'debug', // 'debug', 'info', 'warn', or 'error'
});

Configuration Reference

Global Options

interface CacheOptions {
  // Default TTL for cache entries in milliseconds
  ttl?: number; // Default: 3600000 (1 hour)
  
  // Key prefix for all cache entries
  keyPrefix?: string; // Default: 'cache:'
  
  // Whether to serialize values (helps with complex objects)
  serialize?: boolean; // Default: true
  
  // Logging options
  enableLogging?: boolean; // Default: false
  logLevel?: 'debug' | 'info' | 'warn' | 'error'; // Default: 'info'
  
  // Provider-specific options
  memory?: MemoryCacheOptions;
  redis?: RedisCacheOptions;
  tiered?: TieredCacheOptions;
  
  // Invalidation options
  invalidation?: InvalidationOptions;
}
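Putting the global options together, a minimal configuration might look like the following sketch. All field names come from the CacheOptions interface above; the specific values are illustrative, not recommendations:

```typescript
import { CacheManager } from '@unidev-hub/cache';

// Sketch: a CacheManager configured via the global options above.
const cache = new CacheManager({
  ttl: 600000,           // 10-minute default TTL instead of the 1-hour default
  keyPrefix: 'myapp:',   // replaces the default 'cache:' prefix
  serialize: true,       // serialize complex objects before storing
  enableLogging: true,
  logLevel: 'warn',
  memory: { maxItems: 5000, evictionStrategy: 'lfu' },
});
```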

Memory Cache Options

interface MemoryCacheOptions {
  // Maximum number of items in the cache
  maxItems?: number; // Default: 1000
  
  // Maximum size of the cache in bytes
  maxSize?: number;
  
  // Eviction strategy when cache is full
  evictionStrategy?: 'lru' | 'lfu' | 'fifo'; // Default: 'lru'
  
  // Check interval for expired items in milliseconds
  checkExpirationInterval?: number; // Default: 60000 (1 minute)
  
  // Whether to delete expired items on get
  deleteExpiredOnGet?: boolean; // Default: true
}

Redis Cache Options

interface RedisCacheOptions {
  // Existing Redis client
  client?: IORedis;
  
  // Or connection options
  connection?: {
    host?: string; // Default: 'localhost'
    port?: number; // Default: 6379
    username?: string;
    password?: string;
    db?: number; // Default: 0
    tls?: boolean;
    url?: string;
  };
  
  // Error handling options
  handleErrors?: boolean; // Default: true
  maxReconnectAttempts?: number; // Default: 10
  reconnectStrategy?: 'linear' | 'exponential'; // Default: 'exponential'
}
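As a sketch of the connection-options path (rather than passing an existing client), the following uses only field names from the RedisCacheOptions interface above; the host, db, and password values are placeholders:

```typescript
import { CacheManager } from '@unidev-hub/cache';

// Sketch: configuring the Redis provider from connection options.
const cache = new CacheManager({
  redis: {
    connection: {
      host: 'redis.internal',            // placeholder hostname
      port: 6379,
      password: process.env.REDIS_PASSWORD,
      db: 1,
      tls: true,
    },
    maxReconnectAttempts: 5,             // see RedisCacheOptions for defaults
    reconnectStrategy: 'exponential',
  },
});
```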

Tiered Cache Options

interface TieredCacheOptions {
  // Array of cache providers in order of access (L1, L2, ...)
  providers: {
    provider: CacheProvider;
    options?: {
      writeOnHitFromLowerTier?: boolean; // Default: true
      ttl?: number;
    };
  }[];
  
  // Cache write strategy
  writeStrategy?: 'write-through' | 'write-behind' | 'write-around'; // Default: 'write-through'
  
  // Whether to propagate deletes to all tiers
  propagateDeletes?: boolean; // Default: true
  
  // Whether to propagate updates to all tiers
  propagateUpdates?: boolean; // Default: true
  
  // Maximum time in ms to wait for lower tier responses
  tierTimeout?: number; // Default: 5000 (5 seconds)
}

Invalidation Options

interface InvalidationOptions {
  // Invalidation method
  method: 'tag' | 'pattern' | 'version';
  
  // Tags for tag-based invalidation
  tags?: string[];
  
  // Pattern for pattern-based invalidation
  pattern?: string;
  
  // Namespace or version for version-based invalidation
  namespace?: string;
  version?: string | number;
  
  // Whether to cascade invalidation to related keys
  cascade?: boolean; // Default: false
  
  // Maximum number of keys to invalidate in one operation
  maxKeys?: number; // Default: 1000
}

Lock Options

interface LockOptions {
  // Whether to wait for the lock to be available
  wait?: boolean; // Default: false
  
  // Maximum time to wait for the lock in milliseconds
  waitTimeout?: number;
  
  // Time between retry attempts in milliseconds
  retryDelay?: number; // Default: 200
  
  // Maximum number of retry attempts
  maxRetries?: number; // Default: 10
  
  // Custom owner identifier for the lock
  owner?: string;
  
  // Additional metadata to store with the lock
  metadata?: Record<string, any>;
  
  // Whether to automatically extend the lock while held
  autoExtend?: boolean; // Default: false
  
  // How often to extend the lock in milliseconds
  autoExtendInterval?: number; // Default: 2/3 of TTL
  
  // Force acquire the lock, even if it's already held
  force?: boolean; // Default: false
}
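The earlier lock examples exercise only a few of these options. The following sketch shows an acquire() call using the less common ones; option names come from the LockOptions interface above, and the values (resource name, owner id) are illustrative:

```typescript
import { CacheManager } from '@unidev-hub/cache';

const lockManager = new CacheManager().getLockManager();

// Sketch: acquiring a lock with a wait timeout, a custom owner, and
// auto-extension while the work runs.
const lock = await lockManager.acquire('nightly-report', 30000, {
  wait: true,
  waitTimeout: 5000,         // give up waiting after 5 seconds
  retryDelay: 100,
  owner: 'worker-17',        // custom owner id, useful when debugging contention
  metadata: { job: 'nightly-report' },
  autoExtend: true,
  autoExtendInterval: 20000, // extend every 20s (2/3 of the 30s TTL)
});

if (lock) {
  try {
    // ... exclusive work ...
  } finally {
    await lockManager.release(lock);
  }
}
```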

License

MIT