JSPM

  • Downloads 2542
  • License Apache-2.0

Real-time audio processing for voice, in web browsers

Package Exports

  • @picovoice/web-voice-processor
  • @picovoice/web-voice-processor/dist/esm/index.js

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@picovoice/web-voice-processor) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

Web Voice Processor

Made in Vancouver, Canada by Picovoice

A library for real-time voice processing in web browsers.

Browser compatibility

All modern browsers (Chrome/Edge/Opera, Firefox, Safari) are supported, including on mobile. Internet Explorer is not supported.

Using the Web Audio API requires a secure context (HTTPS connection), with the exception of localhost for local development.

This library includes the utility function browserCompatibilityCheck, which performs feature detection on the current browser and returns an object indicating browser capabilities.

ESM:

import { browserCompatibilityCheck } from '@picovoice/web-voice-processor';
browserCompatibilityCheck();

IIFE:

window.WebVoiceProcessor.browserCompatibilityCheck();

Browser features

  • '_picovoice' (whether all Picovoice requirements are met)
  • 'AudioWorklet' (not currently used; intended for the future)
  • 'isSecureContext' (required for microphone permission for non-localhost)
  • 'mediaDevices' (basis for microphone enumeration / access)
  • 'WebAssembly' (required for all Picovoice engines)
  • 'webKitGetUserMedia' (legacy predecessor to getUserMedia)
  • 'Worker' (required for downsampler and for all engine processing)
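
For example, a minimal sketch (assuming the returned value is a plain object keyed by the feature names above) that warns when the overall Picovoice requirement is not met:

const compatibility = browserCompatibilityCheck();
if (!compatibility['_picovoice']) {
  // One or more required browser features are missing; log the full result for debugging.
  console.warn('Browser is missing Picovoice requirements:', compatibility);
}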

Installation

npm install @picovoice/web-voice-processor

(or)

yarn add @picovoice/web-voice-processor

How to use

Via ES Modules (Create React App, Angular, Webpack, etc.)

import { WebVoiceProcessor } from '@picovoice/web-voice-processor';

Via HTML script tag

Add the following to your HTML:

<script src="@picovoice/web-voice-processor/dist/iife/index.js"></script>

The IIFE version of the library adds WebVoiceProcessor to the window global scope.

Start listening

Get the WebVoiceProcessor with the async static instance method, which returns the singleton instance:

let options = {
  frameLength: 512,
  outputSampleRate: 16000,
  deviceId: null,
  filterOrder: 50,
  vuMeterCallback: undefined,
}; // optional options

let handle = await WebVoiceProcessor.WebVoiceProcessor.instance(options);

WebVoiceProcessor follows the subscribe/unsubscribe pattern. Every subscribed engine receives audio frames as soon as the processor is ready:

const worker = new Worker(`${WORKER_PATH}`);
const engine = {
  onmessage: function (e) {
    // ... handle e.data.inputFrame
  },
};

handle.subscribe(engine);
handle.subscribe(worker);

handle.unsubscribe(engine);
handle.unsubscribe(worker);

An engine is either a Web Worker or an object implementing the following interface within its onmessage method:

onmessage = function (e) {
    switch (e.data.command) {
        case 'process':
            process(e.data.inputFrame);
            break;
    }
};

where e.data.inputFrame is an Int16Array of frameLength audio samples.

For examples of using engines, look at src/engines.
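
As an illustration, here is a minimal sketch of a custom (non-worker) engine, assuming only the interface above: it handles the 'process' command and computes the root-mean-square energy of each Int16Array frame.

const energyEngine = {
  onmessage: function (e) {
    switch (e.data.command) {
      case 'process': {
        const frame = e.data.inputFrame;
        // Compute the root-mean-square (RMS) energy of this frame of samples.
        let sum = 0;
        for (let i = 0; i < frame.length; i++) {
          sum += frame[i] * frame[i];
        }
        console.log('Frame RMS:', Math.sqrt(sum / frame.length));
        break;
      }
    }
  },
};

handle.subscribe(energyEngine);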

To start recording, call start after getting the instance. This starts the audio context, requests microphone permission, and begins recording.

await handle.start();

This call is async due to its Web Audio API microphone request. The promise will be rejected if the user refuses permission, if no suitable device is found, and so on; your calling code should anticipate the possibility of rejection. When the promise resolves, the WebVoiceProcessor is running.
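
For example, a minimal sketch of anticipating a rejected start(), such as when the user denies the microphone permission prompt:

try {
  await handle.start();
  console.log('WebVoiceProcessor is running');
} catch (error) {
  // Permission denied, no suitable input device, etc.
  console.error('Failed to start audio capture:', error);
}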

Pause listening

Pause processing (microphone and Web Audio context will still be active):

await handle.pause();

Stop listening

Close the microphone MediaStream and free all resources in use, including the microphone and the audio context.

await handle.stop();

Options

To update the audio settings of WebVoiceProcessor, call instance again with the new options, then call stop followed by start so that recording resumes with the new settings. This step is required because the audio context has to be recreated for the changes to take effect.
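
For example, a minimal sketch of applying new options (the frameLength value below is illustrative):

// Re-fetch the singleton with the new options, then restart recording.
handle = await WebVoiceProcessor.WebVoiceProcessor.instance({ frameLength: 1024 });
await handle.stop();
await handle.start();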

VuMeter

WebVoiceProcessor includes a built-in engine that computes a VU meter value. To capture it, create a callback and pass it via the options parameter:

function vuMeterCallback(dB) {
  console.log(dB)
}

const handle = await window.WebVoiceProcessor.WebVoiceProcessor.instance({vuMeterCallback});

The vuMeterCallback should expect a number in dBFS within the range [-96, 0].
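
For example, a minimal sketch (illustrative only) that maps the [-96, 0] dBFS range onto a 0-100 scale for driving a simple UI level meter:

function vuMeterCallback(dB) {
  // -96 dBFS maps to 0%, 0 dBFS maps to 100%.
  const percent = ((dB + 96) / 96) * 100;
  console.log(`Level: ${percent.toFixed(1)}%`);
}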

Build from source

Use yarn or npm to build WebVoiceProcessor:

yarn
yarn build

(or)

npm install
npm run-script build

The build script outputs minified and non-minified versions of the IIFE and ESM formats to the dist folder. It also outputs the TypeScript type definitions.