Package Exports
- murmuraba
- murmuraba/package.json
- murmuraba/rnnoise.wasm
🎧 Murmuraba - Advanced Audio Chunking & Noise Reduction
Murmuraba provides real-time audio noise reduction with advanced chunked processing for web applications. The new processFileWithMetrics API delivers chunks as ready-to-use blobs with integrated VAD metrics, eliminating client-side complexity.
🌟 Key Features
- 🎯 Advanced File Chunking: Process audio files into configurable chunks with automatic format conversion
- 📊 Integrated VAD Metrics: Voice Activity Detection metrics calculated per chunk
- 🔄 Format Flexibility: Convert chunks to WAV, WebM, or raw formats automatically
- ⚡ Zero Client Complexity: Chunks come as ready-to-use blobs with timestamps
- 🎙️ Real-time Noise Reduction: RNNoise neural network with WebAssembly
- 🛡️ Full TypeScript Support: Complete type safety with strict interfaces
Installation
npm install murmuraba
# or
yarn add murmuraba
# or
pnpm add murmuraba

🎯 New Chunking API - processFileWithMetrics
The processFileWithMetrics function is the core of Murmuraba's new chunking system. It processes audio files and returns ready-to-use chunks with integrated metrics.
Basic Usage
import { processFileWithMetrics } from 'murmuraba';
// Process file with default 8-second WAV chunks
const result = await processFileWithMetrics(audioFile);
console.log(result.chunks.length); // Number of chunks created
result.chunks.forEach((chunk, index) => {
console.log(`Chunk ${index + 1}:`);
console.log(`- Duration: ${chunk.duration}ms`);
console.log(`- Format: ${chunk.format}`);
console.log(`- VAD Score: ${chunk.vadMetrics.voiceActivityScore}`);
console.log(`- Ready to use: ${chunk.blob.size} bytes`);
});

Advanced Configuration
import { processFileWithMetrics, ChunkOptions } from 'murmuraba';
const options: ChunkOptions = {
duration: 10, // 10-second chunks
format: 'webm', // Output as WebM
quality: 'high' // High quality processing
};
const result = await processFileWithMetrics(audioFile, options);
// Each chunk is ready to use immediately
result.chunks.forEach(chunk => {
// Create audio element
const audio = new Audio(URL.createObjectURL(chunk.blob));
// Access integrated metrics
console.log('Voice detected:', chunk.vadMetrics.voiceDetected);
console.log('Noise level:', chunk.vadMetrics.noiseLevel);
console.log('Signal quality:', chunk.vadMetrics.signalQuality);
});

🔧 TypeScript Interfaces
Core Interfaces
// Chunk configuration options
interface ChunkOptions {
duration?: number; // Chunk duration in seconds (default: 8)
format?: 'wav' | 'webm' | 'raw'; // Output format (default: 'wav')
quality?: 'low' | 'medium' | 'high'; // Processing quality (default: 'medium')
}
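As a quick illustration of the defaults noted above, here is a hypothetical helper (not part of the murmuraba API) that resolves an effective configuration by hand, e.g. for display in a settings UI:

```typescript
// Hypothetical helper, NOT exported by murmuraba: mirrors the documented
// defaults (duration 8 s, 'wav' format, 'medium' quality). The library
// applies these internally; this sketch only makes them explicit.
function withChunkDefaults(options: {
  duration?: number;
  format?: 'wav' | 'webm' | 'raw';
  quality?: 'low' | 'medium' | 'high';
} = {}) {
  return {
    duration: options.duration ?? 8,
    format: options.format ?? 'wav',
    quality: options.quality ?? 'medium',
  };
}
```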
// Processed chunk with ready-to-use blob
interface ProcessedChunk {
id: string; // Unique chunk identifier
blob: Blob; // Ready-to-use audio blob
duration: number; // Duration in milliseconds
format: string; // Audio format (wav/webm/raw)
startTime: number; // Start timestamp (ms)
endTime: number; // End timestamp (ms)
vadMetrics: {
voiceActivityScore: number; // 0-1 voice activity score
voiceDetected: boolean; // Voice presence detection
noiseLevel: number; // 0-1 noise level
signalQuality: number; // 0-1 signal quality
energyLevel: number; // Audio energy level
};
}
// Complete processing result
interface ProcessFileResult {
chunks: ProcessedChunk[]; // Array of processed chunks
totalDuration: number; // Total file duration (ms)
originalFormat: string; // Original file format
processingTime: number; // Processing time (ms)
metadata: {
sampleRate: number; // Audio sample rate
channels: number; // Number of channels
bitDepth?: number; // Bit depth if available
};
}
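To show how the per-chunk VAD metrics above might be consumed in practice, here is a minimal sketch (the `summarizeVad` helper and `ChunkLike` shape are illustrative, not part of the library) that aggregates chunk metrics into file-level statistics:

```typescript
// Illustrative only: aggregate per-chunk VAD metrics into file-level stats,
// assuming the ProcessedChunk shape documented above.
interface VadMetrics {
  voiceActivityScore: number;
  voiceDetected: boolean;
  noiseLevel: number;
  signalQuality: number;
  energyLevel: number;
}

interface ChunkLike {
  duration: number; // ms
  vadMetrics: VadMetrics;
}

function summarizeVad(chunks: ChunkLike[]) {
  const avg = (f: (c: ChunkLike) => number) =>
    chunks.length === 0
      ? 0
      : chunks.reduce((sum, c) => sum + f(c), 0) / chunks.length;
  return {
    totalChunks: chunks.length,
    voicedChunks: chunks.filter((c) => c.vadMetrics.voiceDetected).length,
    // Plain means keep this simple; duration-weighted means would also work.
    meanActivity: avg((c) => c.vadMetrics.voiceActivityScore),
    meanNoise: avg((c) => c.vadMetrics.noiseLevel),
  };
}
```

A helper like this could, for example, skip uploading files whose `voicedChunks` count is zero.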
// Unified options interface
interface ProcessFileOptions {
chunk?: ChunkOptions; // Chunking configuration
noiseReduction?: boolean; // Enable noise reduction (default: true)
vadEnabled?: boolean; // Enable VAD analysis (default: true)
}

Function Signatures
// Main chunking function
function processFileWithMetrics(
file: File,
options?: ProcessFileOptions
): Promise<ProcessFileResult>;
// Backward compatibility overload
function processFileWithMetrics(
file: File,
chunkOptions: ChunkOptions
): Promise<ProcessFileResult>;

🎵 Complete Example with Playback
import { processFileWithMetrics } from 'murmuraba';
async function processAudioFile(file: File) {
try {
// Process with custom options
const result = await processFileWithMetrics(file, {
chunk: {
duration: 6, // 6-second chunks
format: 'wav', // WAV output
quality: 'high' // High quality
},
noiseReduction: true,
vadEnabled: true
});
console.log(`✅ Processed ${result.chunks.length} chunks`);
console.log(`📊 Total duration: ${result.totalDuration / 1000}s`);
console.log(`⚡ Processing time: ${result.processingTime}ms`);
// Use chunks immediately
result.chunks.forEach((chunk, index) => {
console.log(`\n🎵 Chunk ${index + 1}:`);
console.log(` Duration: ${chunk.duration / 1000}s`);
console.log(` Voice Activity: ${(chunk.vadMetrics.voiceActivityScore * 100).toFixed(1)}%`);
console.log(` Voice Detected: ${chunk.vadMetrics.voiceDetected ? '✅' : '❌'}`);
console.log(` Noise Level: ${(chunk.vadMetrics.noiseLevel * 100).toFixed(1)}%`);
console.log(` Signal Quality: ${(chunk.vadMetrics.signalQuality * 100).toFixed(1)}%`);
// Create playable URL immediately
const audioUrl = URL.createObjectURL(chunk.blob);
console.log(` 🔗 Ready to play: ${audioUrl.substring(0, 50)}...`);
// Cleanup URL when done (optional)
// URL.revokeObjectURL(audioUrl);
});
} catch (error) {
console.error('❌ Processing failed:', error);
}
}
// Use with file input
const fileInput = document.getElementById('audioFile') as HTMLInputElement;
fileInput.addEventListener('change', (event) => {
const file = (event.target as HTMLInputElement).files?.[0];
if (file) {
processAudioFile(file);
}
});

🚀 React Integration Example
import React, { useState, useCallback } from 'react';
import { processFileWithMetrics, ProcessedChunk } from 'murmuraba';
function AudioChunkProcessor() {
const [chunks, setChunks] = useState<ProcessedChunk[]>([]);
const [processing, setProcessing] = useState(false);
const [currentPlaying, setCurrentPlaying] = useState<string | null>(null);
const handleFileUpload = useCallback(async (file: File) => {
setProcessing(true);
try {
const result = await processFileWithMetrics(file, {
chunk: { duration: 8, format: 'wav' },
noiseReduction: true,
vadEnabled: true
});
setChunks(result.chunks);
console.log(`✅ Created ${result.chunks.length} chunks`);
} catch (error) {
console.error('Processing failed:', error);
} finally {
setProcessing(false);
}
}, []);
const audioRef = React.useRef<HTMLAudioElement | null>(null);
const playChunk = useCallback((chunk: ProcessedChunk) => {
// new Audio() elements are never attached to the DOM, so keep a ref to the
// playing element instead of looking it up with getElementById
if (audioRef.current) {
audioRef.current.pause();
URL.revokeObjectURL(audioRef.current.src);
}
// Create and play new audio
const audioUrl = URL.createObjectURL(chunk.blob);
const audio = new Audio(audioUrl);
audioRef.current = audio;
audio.onended = () => {
setCurrentPlaying(null);
URL.revokeObjectURL(audioUrl);
};
setCurrentPlaying(`audio-${chunk.id}`);
audio.play();
}, []);
return (
<div className="audio-processor">
<input
type="file"
accept="audio/*"
onChange={(e) => {
const file = e.target.files?.[0];
if (file) handleFileUpload(file);
}}
disabled={processing}
/>
{processing && <div>🔄 Processing audio file...</div>}
<div className="chunks-grid">
{chunks.map((chunk, index) => (
<div key={chunk.id} className="chunk-card">
<h3>Chunk #{index + 1}</h3>
<p>Duration: {(chunk.duration / 1000).toFixed(1)}s</p>
<div className="metrics">
<div>🎤 Voice: {chunk.vadMetrics.voiceDetected ? '✅' : '❌'}</div>
<div>📊 Activity: {(chunk.vadMetrics.voiceActivityScore * 100).toFixed(1)}%</div>
<div>🔊 Noise: {(chunk.vadMetrics.noiseLevel * 100).toFixed(1)}%</div>
<div>⭐ Quality: {(chunk.vadMetrics.signalQuality * 100).toFixed(1)}%</div>
</div>
<button
onClick={() => playChunk(chunk)}
disabled={currentPlaying === `audio-${chunk.id}`}
>
{currentPlaying === `audio-${chunk.id}` ? '⏸️ Playing' : '▶️ Play'}
</button>
</div>
))}
</div>
</div>
);
}
export default AudioChunkProcessor;

🎯 Key Benefits
❌ Before: Manual Client-Side Complexity
// Old way - complex client setup
const chunks = await processFile(file);
for (const chunk of chunks) {
const audioBuffer = await convertToWAV(chunk.rawData); // Manual conversion
const blob = new Blob([audioBuffer], { type: 'audio/wav' }); // Manual blob creation
const vadScore = await calculateVAD(chunk.rawData); // Separate VAD calculation
const url = URL.createObjectURL(blob); // Manual URL creation
}

✅ After: Zero Client Complexity
// New way - everything ready-to-use
const result = await processFileWithMetrics(file);
result.chunks.forEach(chunk => {
// chunk.blob is ready to use immediately
// chunk.vadMetrics are pre-calculated
// No manual conversions needed
const audioUrl = URL.createObjectURL(chunk.blob);
// Just play!
});

🛡️ Backward Compatibility
The API maintains full backward compatibility while providing new functionality:
// Old API still works
const result = await processFileWithMetrics(file, { duration: 10 });
// New unified API
const result = await processFileWithMetrics(file, {
chunk: { duration: 10, format: 'wav' },
noiseReduction: true,
vadEnabled: true
});

🔧 Browser Requirements
- Web Audio API support
- WebAssembly support
- Modern browsers: Chrome 66+, Firefox 60+, Safari 11.1+, Edge 79+
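A defensive capability check for the requirements above can be sketched as follows; `supportsMurmuraba` is a hypothetical helper, not part of the murmuraba API:

```typescript
// Hypothetical capability probe, NOT part of the murmuraba API: verifies
// WebAssembly and Web Audio API support before you attempt to load the library.
function supportsMurmuraba(): { wasm: boolean; webAudio: boolean } {
  const wasm =
    typeof WebAssembly === 'object' &&
    typeof WebAssembly.instantiate === 'function';
  const g = globalThis as any;
  const webAudio =
    typeof g.AudioContext === 'function' ||
    typeof g.webkitAudioContext === 'function'; // older Safari prefix
  return { wasm, webAudio };
}

const caps = supportsMurmuraba();
if (!caps.wasm || !caps.webAudio) {
  console.warn('This browser cannot run murmuraba:', caps);
}
```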
License
MIT © Murmuraba Team
Ready to eliminate client-side audio complexity?
Install Murmuraba and get chunks as ready-to-use blobs with integrated VAD metrics! 🚀