n8n-nodes-redis-debounce
Redis-based debounce and batching for n8n workflows
Intelligently accumulate and batch items using Redis as a buffer with automatic debounce timer reset. Perfect for batching rapid events, grouping chat messages, or rate-limiting API calls.
✨ Features
- 🔄 Smart Debouncing - Timer resets with each new item (true debounce behavior)
- 📦 Automatic Batching - Group multiple items by custom keys
- ⚡ High Performance - Atomic Redis pipelines for consistency
- 🔍 Debug Tools - Built-in scheduler inspection
- 🏷️ Namespaced Keys - No conflicts with other Redis data (n8n:debounce:v2:*)
- 🚀 Schedule-Based Processing - Use with n8n Schedule Trigger (no polling overhead)
📦 Installation
Via n8n Community Nodes (Recommended)
- In n8n: Settings → Community Nodes → Install
- Enter: @andresfrei/n8n-nodes-redis-debounce
- Click Install
From Source (Development)
git clone https://github.com/andresfrei/n8n-nodes-redis-debounce.git
cd n8n-nodes-redis-debounce
pnpm install
pnpm run build
pnpm link --global
# Link in n8n
cd ~/.n8n/custom
pnpm link --global @andresfrei/n8n-nodes-redis-debounce
# Restart n8n
🚀 Quick Start
Use Case: Chat Message Batching
Problem: User sends 5 messages in 8 seconds. You want to process them as a single batch 10 seconds after the LAST message.
Solution: Use debounce with timer reset.
Workflow 1: Message Receiver (Webhook)
Webhook → Redis Debounce (Add Item)
Configuration:
| Parameter | Value |
|---|---|
| Operation | Add Item |
| Key | {{ $json.userId }} |
| Item Data (JSON) | {{ $json }} |
| Debounce Seconds | 10 |
What happens:
- Message 1 arrives → Schedule processing at +10s
- Message 2 arrives (3s later) → Reset timer to +10s from now
- Message 3 arrives (5s later) → Reset timer again to +10s from now
- No new messages for 10s → Ready to process!
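The timer-reset sequence above can be sketched as a hypothetical in-memory model. This is only an illustration of the debounce semantics: the real node keeps this state in Redis, and the function and variable names here are invented for the sketch.

```typescript
// Hypothetical in-memory sketch of the debounce timer reset; the real node
// stores this state in Redis. All names are illustrative.
type Scheduler = Map<string, number>; // key -> processAt (ms since epoch)

// Each call pushes the key's deadline forward to now + debounceSeconds.
function addItem(sched: Scheduler, key: string, nowMs: number, debounceSeconds = 10): void {
  sched.set(key, nowMs + debounceSeconds * 1000);
}

// A key is ready once its deadline has passed with no further resets.
function readyKeys(sched: Scheduler, nowMs: number): string[] {
  return [...sched.entries()]
    .filter(([, processAt]) => processAt <= nowMs)
    .map(([key]) => key);
}

// Replay the timeline above: messages at t = 0s, 3s, and 5s.
const sched: Scheduler = new Map();
addItem(sched, "user_123", 0);     // ready at 10s
addItem(sched, "user_123", 3000);  // reset: ready at 13s
addItem(sched, "user_123", 5000);  // reset: ready at 15s
console.log(readyKeys(sched, 14000)); // [] (still inside the debounce window)
console.log(readyKeys(sched, 16000)); // ["user_123"] (10s of silence elapsed)
```

The key point is that only the last addItem call determines the deadline, which is exactly why a steady stream of messages keeps postponing processing.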
Workflow 2: Batch Processor (Schedule Trigger)
Schedule Trigger (every 5s)
↓
Redis Debounce (Process Ready)
↓
IF (count > 0)
↓
Split In Batches
↓
[Your Processing Logic]
Node Configuration:
Schedule Trigger
- Interval: Every 5 seconds
Redis Debounce
- Operation: Process Ready
- Limit: 50
IF
- Condition: {{ $json.count }} > 0
Split In Batches (True branch)
- Items: {{ $json.items }}
- Batch Size: 1
Your Processing
- Access items: {{ $json.items }}
- Access key: {{ $json.key }}
📋 Operations
1. Add Item
Add item to debounce queue and reset timer.
Parameters:
- key - Unique identifier (user_id, session_id, etc.)
- itemData - JSON object with item data
- debounceSeconds - Wait time after last item (default: 10)
Output:
{
"success": true,
"key": "user_123",
"itemAdded": { "text": "Hello" },
"processAt": "2025-10-08T23:13:05.472Z",
"debounceSeconds": 10
}
2. Process Ready
Get all ready items with data and auto-clear (all-in-one operation).
Parameters:
- limit - Max keys to process (default: 100)
Output:
{
"success": true,
"count": 2,
"items": [
{
"key": "user_123",
"items": [{ "text": "msg1" }, { "text": "msg2" }],
"itemCount": 2
},
{
"key": "user_456",
"items": [{ "text": "msg3" }],
"itemCount": 1
}
]
}
3. Get Ready Keys
Get list of keys ready to process (without data).
Parameters:
- limit - Max keys to retrieve (default: 100)
Output:
{
"success": true,
"count": 2,
"keys": ["user_123", "user_456"]
}
4. Get Items
Retrieve all accumulated items for a specific key.
Parameters:
- key - Key to retrieve
Output:
{
"success": true,
"key": "user_123",
"itemCount": 3,
"items": [{ "text": "msg1" }, { "text": "msg2" }, { "text": "msg3" }]
}
5. Clear Key
Remove all data for a key (items + scheduler entry).
Parameters:
- key - Key to clear
Output:
{
"success": true,
"key": "user_123",
"cleared": true
}
6. Debug Scheduler
View all scheduled keys with timestamps and item counts.
Output:
{
"success": true,
"namespace": "n8n:debounce:v2",
"currentTime": "2025-10-08T23:15:00.000Z",
"totalKeys": 3,
"readyKeys": 1,
"pendingKeys": 2,
"keys": [
{
"key": "user_123",
"processAt": "2025-10-08T23:13:05.472Z",
"isReady": true,
"secondsUntilReady": 0,
"itemCount": 5
}
]
}
🔧 Redis Data Structure
Namespaced keys prevent collisions (n8n:debounce:v2:*):
# Scheduler (SORTED SET - score = timestamp when ready)
n8n:debounce:v2:scheduler → {
1728432010000: "user_123",
1728432020000: "user_456"
}
# Items buffer (LIST per key)
n8n:debounce:v2:items:user_123 → [
'{"text":"msg1","timestamp":"..."}',
'{"text":"msg2","timestamp":"..."}',
'{"text":"msg3","timestamp":"..."}'
]
Operations per Add Item: 2 (RPUSH + ZADD)
Operations per Process Ready: 3 × N keys (LRANGE + ZREM + DEL)
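As a hedged illustration of those per-operation command counts, the two structures above can be modeled in TypeScript with plain Maps standing in for the Redis LIST and SORTED SET. The names are invented for this sketch; the real node issues the equivalent Redis commands (noted in comments) through a single pipeline.

```typescript
// In-memory stand-ins for the Redis structures (illustrative only).
const NS = "n8n:debounce:v2";
const lists = new Map<string, string[]>();   // LIST per key
const scheduler = new Map<string, number>(); // SORTED SET: key -> processAt (ms)

// Add Item: 2 operations (RPUSH + ZADD).
function addItem(key: string, item: object, debounceSeconds: number, nowMs: number): void {
  const listKey = `${NS}:items:${key}`;
  const list = lists.get(listKey) ?? [];
  list.push(JSON.stringify(item));                    // RPUSH
  lists.set(listKey, list);
  scheduler.set(key, nowMs + debounceSeconds * 1000); // ZADD (resets the timer)
}

// Process Ready: 3 operations per ready key (LRANGE + ZREM + DEL).
function processReady(nowMs: number): { key: string; items: object[] }[] {
  const out: { key: string; items: object[] }[] = [];
  for (const [key, processAt] of [...scheduler]) {
    if (processAt > nowMs) continue;                  // ZRANGEBYSCORE filter
    const listKey = `${NS}:items:${key}`;
    const items = (lists.get(listKey) ?? []).map(s => JSON.parse(s)); // LRANGE
    scheduler.delete(key);                            // ZREM
    lists.delete(listKey);                            // DEL
    out.push({ key, items });
  }
  return out;
}
```

Because the scheduler entry and the item list are removed in the same pass, each batch is delivered at most once per quiet period.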
🔑 Credentials
Add a Redis credential in n8n:
| Field | Value | Required |
|---|---|---|
| Host | localhost or remote host | ✅ |
| Port | 6379 (default) | ✅ |
| Password | Authentication password | ❌ |
| Database | 0-15 (default: 0) | ❌ |
Tip: Use separate databases for environments (0 = prod, 1 = dev).
⚡ Performance
- Throughput: 1000+ ops/sec (local Redis)
- Latency: <5ms per operation (pipelined)
- Memory: ~500 bytes per item + key overhead
- Scalability: Horizontal via Redis Cluster
Best Practices:
- Schedule interval ≤ debounce / 2 (e.g., 5s poll for 10s debounce)
- Process batch size: 50-100 keys per cycle
- Use Process Ready for atomic get+clear operations
- Monitor with Debug Scheduler during development
🆚 Version 2.0 Breaking Changes
Removed:
- ❌ RedisDebounceTrigger node (use Schedule Trigger + Process Ready instead)
- ❌ Metadata hash storage (simplified data model)
Added:
- ✅ Debug Scheduler operation
- ✅ Namespaced keys (n8n:debounce:v2:*)
- ✅ Improved performance (fewer Redis operations)
Migration from v1:
- Replace trigger workflows with Schedule Trigger + Process Ready
- Old data in Redis won't conflict (different namespace)
🤝 Contributing
- Fork the repo
- Create feature branch: git checkout -b feature/name
- Commit: git commit -m 'Add feature'
- Push: git push origin feature/name
- Open Pull Request
📄 License
MIT © Andres Frei
💬 Support
- 🐛 Issues: GitHub Issues
- 💡 Discussions: GitHub Discussions
- 📧 Email: andresfrei@gmail.com
🌟 Roadmap
- Output mode for Process Ready (separate items vs single batch)
- TTL-based auto-cleanup for abandoned keys
- Batch size limits (max items per key)
- Redis Cluster support
- Metrics export (Prometheus)
Star the repo if this helped! ⭐