🚀 Murmuraba - Advanced Audio Chunking & Noise Reduction


Murmuraba provides real-time audio noise reduction with advanced chunked processing for web applications. The new processFileWithMetrics API delivers chunks as ready-to-use blobs with integrated VAD metrics, eliminating client-side complexity.

🌟 Key Features

  • 🎯 Advanced File Chunking: Process audio files into configurable chunks with automatic format conversion
  • 📊 Integrated VAD Metrics: Voice Activity Detection metrics calculated per chunk
  • 🔄 Format Flexibility: Convert chunks to WAV, WebM, or raw formats automatically
  • ⚡ Zero Client Complexity: Chunks come as ready-to-use blobs with timestamps
  • 🎙️ Real-time Noise Reduction: RNNoise neural network with WebAssembly
  • 🛡️ Full TypeScript Support: Complete type safety with strict interfaces

Installation

npm install murmuraba
# or
yarn add murmuraba
# or  
pnpm add murmuraba

🎯 New Chunking API - processFileWithMetrics

The processFileWithMetrics function is the core of Murmuraba's new chunking system. It processes audio files and returns ready-to-use chunks with integrated metrics.

Basic Usage

import { processFileWithMetrics } from 'murmuraba';

// Process file with default 8-second WAV chunks
const result = await processFileWithMetrics(audioFile);

console.log(result.chunks.length); // Number of chunks created
result.chunks.forEach((chunk, index) => {
  console.log(`Chunk ${index + 1}:`);
  console.log(`- Duration: ${chunk.duration}ms`);
  console.log(`- Format: ${chunk.format}`);
  console.log(`- VAD Score: ${chunk.vadMetrics.voiceActivityScore}`);
  console.log(`- Ready to use: ${chunk.blob.size} bytes`);
});

Advanced Configuration

import { processFileWithMetrics, ChunkOptions } from 'murmuraba';

const options: ChunkOptions = {
  duration: 10,        // 10-second chunks
  format: 'webm',      // Output as WebM
  quality: 'high'      // High quality processing
};

const result = await processFileWithMetrics(audioFile, options);

// Each chunk is ready to use immediately
result.chunks.forEach(chunk => {
  // Create audio element
  const audio = new Audio(URL.createObjectURL(chunk.blob));
  
  // Access integrated metrics
  console.log('Voice detected:', chunk.vadMetrics.voiceDetected);
  console.log('Noise level:', chunk.vadMetrics.noiseLevel);
  console.log('Signal quality:', chunk.vadMetrics.signalQuality);
});

🔧 TypeScript Interfaces

Core Interfaces

// Chunk configuration options
interface ChunkOptions {
  duration?: number;           // Chunk duration in seconds (default: 8)
  format?: 'wav' | 'webm' | 'raw';  // Output format (default: 'wav')
  quality?: 'low' | 'medium' | 'high';  // Processing quality (default: 'medium')
}

// Processed chunk with ready-to-use blob
interface ProcessedChunk {
  id: string;                  // Unique chunk identifier
  blob: Blob;                  // Ready-to-use audio blob
  duration: number;            // Duration in milliseconds
  format: string;              // Audio format (wav/webm/raw)
  startTime: number;           // Start timestamp (ms)
  endTime: number;             // End timestamp (ms)
  vadMetrics: {
    voiceActivityScore: number;     // 0-1 voice activity score
    voiceDetected: boolean;         // Voice presence detection
    noiseLevel: number;             // 0-1 noise level
    signalQuality: number;          // 0-1 signal quality
    energyLevel: number;            // Audio energy level
  };
}

// Complete processing result
interface ProcessFileResult {
  chunks: ProcessedChunk[];    // Array of processed chunks
  totalDuration: number;       // Total file duration (ms)
  originalFormat: string;      // Original file format
  processingTime: number;      // Processing time (ms)
  metadata: {
    sampleRate: number;        // Audio sample rate
    channels: number;          // Number of channels
    bitDepth?: number;         // Bit depth if available
  };
}

// Unified options interface
interface ProcessFileOptions {
  chunk?: ChunkOptions;        // Chunking configuration
  noiseReduction?: boolean;    // Enable noise reduction (default: true)
  vadEnabled?: boolean;        // Enable VAD analysis (default: true)
}
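
The per-chunk VAD fields above lend themselves to simple post-processing, e.g. deciding which chunks are worth uploading. As an illustrative sketch (not part of the package — `summarizeChunks` and the trimmed-down local types below are hypothetical, mirroring only the fields they use):

```typescript
// Minimal stand-ins for the interfaces above (only the fields used here)
interface VadMetricsLike {
  voiceActivityScore: number;  // 0-1 voice activity score
  voiceDetected: boolean;      // Voice presence detection
  noiseLevel: number;          // 0-1 noise level
}

interface ChunkLike {
  duration: number;            // Duration in milliseconds
  vadMetrics: VadMetricsLike;
}

interface VoiceSummary {
  voicedChunks: number;        // Chunks where voice was detected
  voicedRatio: number;         // Fraction of chunks with voice (0-1)
  meanActivity: number;        // Average voiceActivityScore
  meanNoise: number;           // Average noiseLevel
}

// Aggregate per-chunk VAD metrics into one file-level summary
function summarizeChunks(chunks: ChunkLike[]): VoiceSummary {
  if (chunks.length === 0) {
    return { voicedChunks: 0, voicedRatio: 0, meanActivity: 0, meanNoise: 0 };
  }
  const voicedChunks = chunks.filter(c => c.vadMetrics.voiceDetected).length;
  const mean = (f: (c: ChunkLike) => number) =>
    chunks.reduce((sum, c) => sum + f(c), 0) / chunks.length;
  return {
    voicedChunks,
    voicedRatio: voicedChunks / chunks.length,
    meanActivity: mean(c => c.vadMetrics.voiceActivityScore),
    meanNoise: mean(c => c.vadMetrics.noiseLevel),
  };
}
```

With a real ProcessFileResult this would be called as `summarizeChunks(result.chunks)`, for example to skip silent chunks before transcription or upload.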

Function Signatures

// Main chunking function
function processFileWithMetrics(
  file: File,
  options?: ProcessFileOptions
): Promise<ProcessFileResult>;

// Backward compatibility overload
function processFileWithMetrics(
  file: File,
  chunkOptions: ChunkOptions
): Promise<ProcessFileResult>;

🎵 Complete Example with Playback

import { processFileWithMetrics } from 'murmuraba';

async function processAudioFile(file: File) {
  try {
    // Process with custom options
    const result = await processFileWithMetrics(file, {
      chunk: {
        duration: 6,      // 6-second chunks
        format: 'wav',    // WAV output
        quality: 'high'   // High quality
      },
      noiseReduction: true,
      vadEnabled: true
    });

    console.log(`✅ Processed ${result.chunks.length} chunks`);
    console.log(`📊 Total duration: ${result.totalDuration / 1000}s`);
    console.log(`⚡ Processing time: ${result.processingTime}ms`);

    // Use chunks immediately
    result.chunks.forEach((chunk, index) => {
      console.log(`\n🎵 Chunk ${index + 1}:`);
      console.log(`   Duration: ${chunk.duration / 1000}s`);
      console.log(`   Voice Activity: ${(chunk.vadMetrics.voiceActivityScore * 100).toFixed(1)}%`);
      console.log(`   Voice Detected: ${chunk.vadMetrics.voiceDetected ? '✅' : '❌'}`);
      console.log(`   Noise Level: ${(chunk.vadMetrics.noiseLevel * 100).toFixed(1)}%`);
      console.log(`   Signal Quality: ${(chunk.vadMetrics.signalQuality * 100).toFixed(1)}%`);
      
      // Create playable URL immediately
      const audioUrl = URL.createObjectURL(chunk.blob);
      console.log(`   🔗 Ready to play: ${audioUrl.substring(0, 50)}...`);
      
      // Cleanup URL when done (optional)
      // URL.revokeObjectURL(audioUrl);
    });

  } catch (error) {
    console.error('❌ Processing failed:', error);
  }
}

// Use with file input
const fileInput = document.getElementById('audioFile') as HTMLInputElement;
fileInput.addEventListener('change', (event) => {
  const file = (event.target as HTMLInputElement).files?.[0];
  if (file) {
    processAudioFile(file);
  }
});

🔄 React Integration Example

import React, { useState, useCallback, useRef } from 'react';
import { processFileWithMetrics, ProcessedChunk } from 'murmuraba';

function AudioChunkProcessor() {
  const [chunks, setChunks] = useState<ProcessedChunk[]>([]);
  const [processing, setProcessing] = useState(false);
  const [currentPlaying, setCurrentPlaying] = useState<string | null>(null);
  const audioRef = useRef<HTMLAudioElement | null>(null);

  const handleFileUpload = useCallback(async (file: File) => {
    setProcessing(true);
    try {
      const result = await processFileWithMetrics(file, {
        chunk: { duration: 8, format: 'wav' },
        noiseReduction: true,
        vadEnabled: true
      });
      
      setChunks(result.chunks);
      console.log(`✅ Created ${result.chunks.length} chunks`);
    } catch (error) {
      console.error('Processing failed:', error);
    } finally {
      setProcessing(false);
    }
  }, []);

  const playChunk = useCallback((chunk: ProcessedChunk) => {
    // Stop any currently playing audio. Elements created with `new Audio()`
    // never enter the DOM, so keep a ref instead of using getElementById.
    audioRef.current?.pause();

    // Create and play new audio
    const audioUrl = URL.createObjectURL(chunk.blob);
    const audio = new Audio(audioUrl);
    audioRef.current = audio;

    audio.onended = () => {
      setCurrentPlaying(null);
      URL.revokeObjectURL(audioUrl);
    };

    setCurrentPlaying(`audio-${chunk.id}`);
    audio.play();
  }, []);

  return (
    <div className="audio-processor">
      <input
        type="file"
        accept="audio/*"
        onChange={(e) => {
          const file = e.target.files?.[0];
          if (file) handleFileUpload(file);
        }}
        disabled={processing}
      />
      
      {processing && <div>🔄 Processing audio file...</div>}
      
      <div className="chunks-grid">
        {chunks.map((chunk, index) => (
          <div key={chunk.id} className="chunk-card">
            <h3>Chunk #{index + 1}</h3>
            <p>Duration: {(chunk.duration / 1000).toFixed(1)}s</p>
            <div className="metrics">
              <div>🎤 Voice: {chunk.vadMetrics.voiceDetected ? '✅' : '❌'}</div>
              <div>📊 Activity: {(chunk.vadMetrics.voiceActivityScore * 100).toFixed(1)}%</div>
              <div>🔇 Noise: {(chunk.vadMetrics.noiseLevel * 100).toFixed(1)}%</div>
              <div>⭐ Quality: {(chunk.vadMetrics.signalQuality * 100).toFixed(1)}%</div>
            </div>
            <button 
              onClick={() => playChunk(chunk)}
              disabled={currentPlaying === `audio-${chunk.id}`}
            >
              {currentPlaying === `audio-${chunk.id}` ? '⏸️ Playing' : '▶️ Play'}
            </button>
          </div>
        ))}
      </div>
    </div>
  );
}

export default AudioChunkProcessor;

🎯 Key Benefits

❌ Before: Manual Client-Side Complexity

// Old way - complex client setup
const chunks = await processFile(file);
for (const chunk of chunks) {
  const audioBuffer = await convertToWAV(chunk.rawData);  // Manual conversion
  const blob = new Blob([audioBuffer], { type: 'audio/wav' });  // Manual blob creation
  const vadScore = await calculateVAD(chunk.rawData);  // Separate VAD calculation
  const url = URL.createObjectURL(blob);  // Manual URL creation
}

✅ After: Zero Client Complexity

// New way - everything ready-to-use
const result = await processFileWithMetrics(file);
result.chunks.forEach(chunk => {
  // chunk.blob is ready to use immediately
  // chunk.vadMetrics are pre-calculated
  // No manual conversions needed
  const audioUrl = URL.createObjectURL(chunk.blob);
  // Just play!
});

πŸ›‘οΈ Backward Compatibility

The API maintains full backward compatibility while providing new functionality:

// Old API still works
const result = await processFileWithMetrics(file, { duration: 10 });

// New unified API
const result = await processFileWithMetrics(file, {
  chunk: { duration: 10, format: 'wav' },
  noiseReduction: true,
  vadEnabled: true
});

🔧 Browser Requirements

  • Web Audio API support
  • WebAssembly support
  • Modern browsers: Chrome 66+, Firefox 60+, Safari 11.1+, Edge 79+
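
A defensive capability check before calling processFileWithMetrics avoids hard failures on older browsers. The helper below is a sketch (`checkBrowserSupport` is not part of the package) probing the two hard requirements listed above:

```typescript
interface SupportReport {
  webAudio: boolean;   // Web Audio API available
  wasm: boolean;       // WebAssembly available
  supported: boolean;  // Both requirements met
}

// Probe the two hard requirements before attempting to process audio
function checkBrowserSupport(): SupportReport {
  const g = globalThis as any;
  const webAudio =
    typeof g.AudioContext === 'function' ||
    typeof g.webkitAudioContext === 'function'; // older Safari prefix
  const wasm =
    typeof g.WebAssembly === 'object' &&
    typeof g.WebAssembly.instantiate === 'function';
  return { webAudio, wasm, supported: webAudio && wasm };
}
```

A caller could then fall back gracefully: `if (!checkBrowserSupport().supported) { /* warn or disable the feature */ }`.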

License

MIT Β© Murmuraba Team


Ready to eliminate client-side audio complexity?
Install Murmuraba and get chunks as ready-to-use blobs with integrated VAD metrics! 🚀