JSPM


Polyfill for the SpeechRecognition browser API using AWS Transcribe

Package Exports

  • speech-recognition-aws-polyfill

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (speech-recognition-aws-polyfill) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
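If you maintain the package, declaring an "exports" field in package.json avoids the automatic detection described above. A minimal sketch (the "./dist/index.js" path is an assumption; substitute the package's real entry point):

```json
{
  "name": "speech-recognition-aws-polyfill",
  "exports": {
    ".": "./dist/index.js"
  }
}
```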

Readme

speech-recognition-aws-polyfill


A polyfill for the experimental browser Speech Recognition API which falls back to AWS Transcribe.

Features

Note: this is not a polyfill for MediaDevices.getUserMedia() - check browser support for that API separately.

Who is it for?

A polyfill already exists at /antelow/speech-polyfill, which uses Azure Cognitive Services as a fallback.

This library is a good fit if you are already using AWS services (or you would just prefer to use AWS).

The Azure version also seems to have gone stale, with no updates for around two years, so this library is perhaps also a better choice if you're looking for something a little more production-ready.

Prerequisites

  • An AWS account
  • A Cognito identity pool (unauthenticated or authenticated) with the TranscribeStreaming permission.

AWS Setup Guide

  1. In the AWS console, visit the Cognito section and click Manage Identity Pools.
  2. Click Create new identity pool and give it a name.
  3. To allow anyone who visits your app to use speech recognition (e.g. for public-facing web apps), check Enable access to unauthenticated identities.
  4. If you want to configure authentication instead, do so now.
  5. Click Create Pool.
  6. Choose or create a role for your users. If you are just using authenticated sessions, you are only interested in the second section. If you aren't sure what to do here, the default role is fine.
  7. Make sure your role has the TranscribeStreaming policy attached. To attach it, go to IAM -> Roles, find your role, click "Attach policies" and search for TranscribeStreaming.
  8. Go back to Cognito and find your identity pool. Click Edit identity pool in the top right and make a note of your Identity pool ID.
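If you prefer an inline policy over searching for a managed one in step 7, a minimal policy document might look like the following. This is a sketch, not the library's documented policy: the action name transcribe:StartStreamTranscriptionWebSocket is our assumption about what browser-based streaming transcription needs, so verify it against the AWS Transcribe documentation for your account.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "transcribe:StartStreamTranscriptionWebSocket",
      "Resource": "*"
    }
  ]
}
```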

Usage

Install with npm i --save speech-recognition-aws-polyfill

Import into your application:

import SpeechRecognitionPolyfill from 'speech-recognition-aws-polyfill'

Or use from the unpkg CDN:

<script src="https://unpkg.com/speech-recognition-aws-polyfill"></script>

Create a new instance of the polyfill:

const recognition = new SpeechRecognitionPolyfill({
  IdentityPoolId: 'eu-west-1:11111111-1111-1111-1111-1111111111', // your Identity Pool ID
  region: 'eu-west-1' // your AWS region
})

You can then interact with recognition the same as you would with an instance of window.SpeechRecognition.

The recognizer will stop capturing if it doesn't detect speech for a period. You can also stop manually with the stop() method.
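Because the polyfill mirrors the window.SpeechRecognition interface, a common pattern is to prefer the browser's native implementation and fall back to the polyfill only where it is missing. A sketch of that pattern (chooseRecognizer is a hypothetical helper, not part of this library; note that the native constructor takes no options, while the polyfill needs the Cognito configuration shown above):

```javascript
// Hypothetical helper: pick the native SpeechRecognition implementation
// when the environment provides one, otherwise fall back to the polyfill.
function chooseRecognizer(globalObj, PolyfillCtor) {
  return (
    globalObj.SpeechRecognition ||
    globalObj.webkitSpeechRecognition || // prefixed name used by Chromium browsers
    PolyfillCtor
  );
}

// In a browser you might then write:
//   const Recognition = chooseRecognizer(window, SpeechRecognitionPolyfill);
```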

Support Table

Properties

Property Supported
lang Yes
grammars No
continuous No
interimResults No
maxAlternatives No
serviceURI No

Methods

Method Supported
abort Yes
start Yes
stop Yes

Events

Event Supported
audiostart Yes
audioend Yes
start Yes
end Yes
error Yes
nomatch Yes
result Yes
soundstart Partial
soundend Partial
speechstart Partial
speechend Partial

Full Example

import SpeechRecognitionPolyfill from 'speech-recognition-aws-polyfill'

const recognition = new SpeechRecognitionPolyfill({
  IdentityPoolId: 'eu-west-1:11111111-1111-1111-1111-1111111111', // your Identity Pool ID
  region: 'eu-west-1' // your AWS region
})
recognition.lang = 'en-US';

document.body.onclick = function() {
  recognition.start();
  console.log('Listening');
}

recognition.onresult = function(event) {
  const { transcript } = event.results[0][0]
  console.log('Heard: ', transcript)
}

recognition.onerror = console.error

Roadmap

  • Further increase parity between the two implementations by better supporting additional options and events.
  • Build a companion polyfill for speech synthesis (TTS) using AWS Polly
  • Provide a way to output the transcription as an RxJS observable

Contributing and Bugs

Questions, comments and contributions are very welcome. Just raise an issue/PR (or check out the fancy new GitHub Discussions feature).

License

MIT