node-turen
The Node.js client for TurenCore.
Installation
$ npm install turen --save

Get Started

var options = {
host: 'apigwws.open.rokid.com',
port: 443,
key: 'rokid openplatform key',
secret: 'rokid openplatform secret',
deviceTypeId: 'rokid device type id',
deviceId: 'rokid device id',
};
// TurenSpeech is exposed under the client namespace, like TurenLogger below.
var TurenSpeech = require('turen').client.TurenSpeech;
var speech = new TurenSpeech();
speech.on('voice coming', (event) => {
// voice coming
});
speech.on('voice accept', (event) => {
// voice accept
});
speech.on('asr end', (asr, event) => {
// asr
});
speech.on('nlp', (response, event) => {
// response.asr
// response.nlp
// response.action
});
speech.on('disconnect', (socketType/* event or rpc */) => {
// got if some event is disconnected
});
speech.start(options);

Services
TurenCore provides multiple socket-based services for different functionalities.
RPC
The RPC service is used to call methods of TurenCore, including:
- Restart()
- RestartWithoutArgs()
- Pickup()
- IsPickup()
- OpenMic()
- SetStack()
- SetSkillOption()
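TurenCore's actual wire protocol is not documented here. Purely as an illustration of the call-by-name pattern these RPC methods follow, the sketch below encodes a call as one JSON line; the framing, field names, and the stack value are assumptions for the example, not turen's real API:

```javascript
// Sketch only: encodes an RPC call as a single JSON line.
// This framing is an assumption for illustration, not TurenCore's protocol.
function encodeRpcCall(method, args) {
  return JSON.stringify({ method: method, args: args || [] }) + '\n';
}

function decodeRpcCall(line) {
  return JSON.parse(line);
}

// The methods listed above would be invoked by name, e.g.:
var pickup = encodeRpcCall('Pickup');
var setStack = encodeRpcCall('SetStack', ['your-stack-id']); // hypothetical value

console.log(pickup.trim());
console.log(decodeRpcCall(setStack).method);
```

Whatever the real framing is, the shape of a call is always a method name from the list above plus its arguments.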
CMD
The CMD service is used to call debug methods, including:
- openMic()
- closeMic()
- pickup()
- reset()
- readyForAsr()
- setAngle(deg)
Event
The Event service notifies you of all voice and nlp events, including:
- voice coming returns the speech energy and direction sl when triggered locally.
- voice local sleep when sleeping locally.
- asr begin when cloud speech recognition begins.
- asr end returns the final value asr when cloud speech recognition ends.
- nlp returns the nlp and action when NLP is done.
A complete events list is available here.
Audio
The Audio service is also for debugging TurenCore; it lets you pull audio streams from every processing stage, such as AEC, BF, and VAD.
You could use it as a socket handle:
var audioStream = new TurenAudio('mic_in');
audioStream.on('data', (chunk) => {
// got data
});
audioStream.connect();

The streaming API can also be used:
var writable = fs.createWriteStream('/path/to/your/file');
audioStream.pipe(writable);

Push audio data to TurenCore:
var audioWritable = new TurenAudio('mic_out');
await audioWritable.connect();
// write a buffer
audioWritable.send(Buffer.alloc(1024)); // new Buffer() is deprecated
// write a readable stream
audioWritable.send(fs.createReadStream('/path/to/your/pcm/file'));

The available type values of TurenAudio are:
- mic_in as a Readable stream to pull the raw data; its format depends on your microphone configuration.
- bf_out as a Readable stream to pull the pcm after BF.
- bf4_out as a Readable stream to pull the pcm after BF with the selected 4 channels.
- bf12_out as a Readable stream to pull the pcm after BF with the full 12 channels.
- aec_out as a Readable stream to pull the pcm after AEC.
- codec_in as a Readable stream to pull the pcm before the opu codec.
- speech_in as a Readable stream to pull the opu before uploading to the cloud speech service.
- mic_out as a Writable stream to push raw data.
The following types are available only for our CTC model:
- ctc.line_0 as a Readable stream to pull the pcm to the CTC line 0.
- ctc.line_1 as a Readable stream to pull the pcm to the CTC line 1.
- ctc.line_2 as a Readable stream to pull the pcm to the CTC line 2.
- ctc.line_3 as a Readable stream to pull the pcm to the CTC line 3.
Logger
You can also access TurenCore's logs via TurenLogger:
var TurenLogger = require('turen').client.TurenLogger;
var logger = new TurenLogger();
logger.on('data', (data) => {
// receives the data
});
logger.connect();

License
MIT