Package Exports
- expo-streamer
- expo-streamer/build/index.js
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue on the original package (expo-streamer) asking it to add an "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
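For reference, a minimal "exports" field covering the subpaths detected above could look like the following in the package's package.json. This is illustrative only; the actual entry points should be confirmed against the package's build output before adopting it.
{
  "exports": {
    ".": "./build/index.js",
    "./build/index.js": "./build/index.js"
  }
}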
Readme
expo-streamer
Enterprise-grade audio streaming and recording for Expo applications
Zero-crash reliability • Full TypeScript support • SOLID architecture • Production ready
Why Choose expo-streamer?
Feature | Description |
---|---|
TypeScript First | Full TypeScript support with comprehensive type definitions |
Thread Safe | Proper synchronization for multi-threaded audio operations |
SOLID Architecture | Clean, maintainable code following industry best practices |
Fully Tested | 100% test coverage with comprehensive test suites |
Cross Platform | Works seamlessly on iOS and Android |
Real-time | Low-latency audio streaming perfect for voice apps |
Installation
npm install expo-streamer
# or
yarn add expo-streamer
TypeScript Usage
Basic Recording and Playback
import {
ExpoStreamer,
RecordingConfig,
AudioDataEvent,
RecordingEncodingTypes,
SampleRates,
EncodingTypes
} from 'expo-streamer';
// Define recording configuration with full TypeScript support
const recordingConfig: RecordingConfig = {
sampleRate: SampleRates.SR_44100,
channels: 1,
encoding: RecordingEncodingTypes.PCM_16BIT,
interval: 250,
onAudioStream: (event: AudioDataEvent) => {
console.log('Audio data received:', {
data: event.data,
position: event.position,
soundLevel: event.soundLevel
});
}
};
// Start recording with type safety
const { recordingResult, subscription } = await ExpoStreamer.startRecording(recordingConfig);
// Play audio with proper typing
await ExpoStreamer.playAudio(base64AudioData, 'turn-1', EncodingTypes.PCM_S16LE);
// Stop recording
const recording = await ExpoStreamer.stopRecording();
Advanced Configuration with Types
import {
ExpoStreamer,
SoundConfig,
PlaybackModes,
SampleRates,
EncodingTypes
} from 'expo-streamer';
// Configure audio playback with type safety
const soundConfig: SoundConfig = {
sampleRate: SampleRates.SR_44100,
playbackMode: PlaybackModes.VOICE_PROCESSING,
enableBuffering: true,
bufferConfig: {
targetBufferMs: 100,
maxBufferMs: 500,
minBufferMs: 50
}
};
await ExpoStreamer.setSoundConfig(soundConfig);
// Use typed encoding constants
await ExpoStreamer.playAudio(
audioData,
'turn-1',
EncodingTypes.PCM_S16LE
);
Voice-Optimized Configuration
// Voice processing with 24000 Hz sample rate (recommended for voice applications)
const voiceConfig: RecordingConfig = {
sampleRate: SampleRates.SR_24000, // Voice-optimized sample rate
channels: 1, // Mono for voice
encoding: RecordingEncodingTypes.PCM_16BIT,
interval: 50, // Fast response for real-time voice
voiceProcessing: true, // Enable platform AEC/NS/AGC when needed
preGainDb: 6, // Optional gain boost for softer microphones
onAudioStream: async (event: AudioDataEvent) => {
// Process voice data with optimal settings
console.log('Voice data:', {
soundLevel: event.soundLevel,
dataLength: event.data.length
});
}
};
const soundConfig: SoundConfig = {
sampleRate: SampleRates.SR_24000,
playbackMode: PlaybackModes.VOICE_PROCESSING,
enableBuffering: true,
bufferConfig: {
targetBufferMs: 50, // Lower latency for voice
maxBufferMs: 200,
minBufferMs: 25
}
};
await ExpoStreamer.startRecording(voiceConfig);
await ExpoStreamer.setSoundConfig(soundConfig);
> **Tip:** `voiceProcessing` now defaults to `false` so recordings capture the hotter, unprocessed microphone signal. Toggle it on when you need the built-in echo cancellation/noise suppression pipeline. Use `preGainDb` (range −24 dB to +24 dB) to fine-tune input loudness without clipping.
Event Handling with TypeScript
import {
ExpoStreamer,
AudioDataEvent,
SoundChunkPlayedEventPayload
} from 'expo-streamer';
// Subscribe to audio events with proper typing
const subscription = ExpoStreamer.subscribeToAudioEvents(
async (event: AudioDataEvent) => {
console.log('Audio event:', {
data: event.data,
soundLevel: event.soundLevel,
position: event.position
});
}
);
// Subscribe to playback events
const playbackSubscription = ExpoStreamer.subscribeToSoundChunkPlayed(
async (event: SoundChunkPlayedEventPayload) => {
console.log('Chunk played:', {
isFinalChunk: event.isFinalChunk,
turnId: event.turnId
});
}
);
// Clean up subscriptions
subscription?.remove();
playbackSubscription?.remove();
Stopping Audio Playback
import { ExpoStreamer } from 'expo-streamer';
// Graceful stop - allows buffered audio to finish
await ExpoStreamer.stopAudio();
// Immediate flush - clears buffer and stops mid-stream
await ExpoStreamer.flushAudio();
When to use `stopAudio()` vs `flushAudio()`:
- `stopAudio()`: Use when you want to gracefully stop playback, allowing any buffered audio to finish playing. Good for natural conversation endings.
- `flushAudio()`: Use when you need to immediately stop all audio output, such as when the user interrupts or cancels playback. This clears all scheduled audio buffers without waiting for them to drain.
// Example: User interruption handling
async function handleUserInterrupt() {
// Immediately stop all audio
await ExpoStreamer.flushAudio();
// Clear the turn queue
await ExpoStreamer.clearSoundQueueByTurnId(currentTurnId);
console.log('Audio interrupted and flushed');
}
API Reference
Core Types
interface RecordingConfig {
sampleRate?: SampleRate; // SampleRates.SR_16000 | SR_24000 | SR_44100 | SR_48000
channels?: number; // 1 (mono) or 2 (stereo)
encoding?: RecordingEncodingType; // RecordingEncodingTypes.PCM_8BIT | PCM_16BIT | PCM_32BIT
interval?: number; // Callback interval in milliseconds
onAudioStream?: (event: AudioDataEvent) => void;
}
interface AudioDataEvent {
data: string; // Base64 encoded audio data
position: number; // Position in the audio stream
soundLevel?: number; // Audio level for visualization
eventDataSize: number;
totalSize: number;
}
interface SoundConfig {
sampleRate?: SampleRate; // SampleRates.SR_16000 | SR_24000 | SR_44100 | SR_48000
playbackMode?: PlaybackMode; // PlaybackModes.REGULAR | VOICE_PROCESSING | CONVERSATION
useDefault?: boolean;
enableBuffering?: boolean;
bufferConfig?: Partial<IAudioBufferConfig>;
}
// Available Enum Constants
const RecordingEncodingTypes = {
PCM_32BIT: 'pcm_32bit',
PCM_16BIT: 'pcm_16bit',
PCM_8BIT: 'pcm_8bit',
} as const;
const SampleRates = {
SR_16000: 16000,
SR_24000: 24000,
SR_44100: 44100,
SR_48000: 48000,
} as const;
const PlaybackModes = {
REGULAR: 'regular',
VOICE_PROCESSING: 'voiceProcessing',
CONVERSATION: 'conversation',
} as const;
const EncodingTypes = {
PCM_F32LE: 'pcm_f32le',
PCM_S16LE: 'pcm_s16le',
} as const;
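The `data` field of each `AudioDataEvent` is a base64-encoded chunk of raw PCM. A minimal sketch of turning a `PCM_16BIT` chunk into samples (for example, for a waveform visualization), assuming a global `atob` is available (newer React Native runtimes provide one; otherwise use a base64 polyfill):
import { AudioDataEvent } from 'expo-streamer';
// Decode a base64-encoded PCM_16BIT chunk into signed 16-bit samples.
function decodePcm16(event: AudioDataEvent): Int16Array {
  const binary = atob(event.data); // base64 -> binary string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  // Reinterpret the bytes as 16-bit samples (device-native, little-endian on ARM/x86).
  return new Int16Array(bytes.buffer);
}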
Recording Methods
Method | Return Type | Description |
---|---|---|
startRecording(config: RecordingConfig) | Promise<StartRecordingResult> | Start microphone recording |
stopRecording() | Promise<AudioRecording> | Stop recording and return data |
pauseRecording() | Promise<void> | Pause current recording |
resumeRecording() | Promise<void> | Resume paused recording |
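`pauseRecording()` and `resumeRecording()` are not shown in the examples above; a minimal sketch of toggling an already-started recording session (the helper name is illustrative):
import { ExpoStreamer } from 'expo-streamer';
// Pause or resume the active recording session without ending it,
// e.g. while the app is briefly backgrounded.
async function setRecordingPaused(paused: boolean): Promise<void> {
  if (paused) {
    await ExpoStreamer.pauseRecording();
  } else {
    await ExpoStreamer.resumeRecording();
  }
}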
Playback Methods
Method | Return Type | Description |
---|---|---|
playAudio(data: string, turnId: string, encoding?: Encoding) | Promise<void> | Play base64 audio data |
pauseAudio() | Promise<void> | Pause current playback |
stopAudio() | Promise<void> | Stop all audio playback |
flushAudio() | Promise<void> | Immediately flush audio buffer and stop mid-stream |
clearPlaybackQueueByTurnId(turnId: string) | Promise<void> | Clear queue for specific turn |
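Because `playAudio` takes a `turnId`, chunks that arrive incrementally (for example over a WebSocket) can be handed to the player one by one under the same turn. A minimal sketch; the chunk source and handler name are illustrative, not part of the API:
import { ExpoStreamer, EncodingTypes } from 'expo-streamer';
// Feed base64-encoded PCM chunks for one assistant turn as they arrive.
async function handleIncomingChunk(turnId: string, base64Chunk: string): Promise<void> {
  await ExpoStreamer.playAudio(base64Chunk, turnId, EncodingTypes.PCM_S16LE);
}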
Configuration Methods
Method | Return Type | Description |
---|---|---|
setSoundConfig(config: SoundConfig) | Promise<void> | Configure audio playback |
getPermissionsAsync() | Promise<PermissionResponse> | Check microphone permissions |
requestPermissionsAsync() | Promise<PermissionResponse> | Request microphone permissions |
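A minimal sketch of a permission check before starting a recording, assuming `PermissionResponse` exposes a `granted` boolean as the standard Expo permission responses do:
import { ExpoStreamer } from 'expo-streamer';
// Return true once microphone access is granted, prompting the user if needed.
async function ensureMicrophonePermission(): Promise<boolean> {
  const current = await ExpoStreamer.getPermissionsAsync();
  if (current.granted) {
    return true;
  }
  const requested = await ExpoStreamer.requestPermissionsAsync();
  return requested.granted;
}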
Testing
# Run all tests with TypeScript checking
npm run test:all
# Individual test suites
npm test # Jest (TypeScript)
npm run test:android # Android test analysis
npm run test:ios # iOS test guide
npm run test:coverage # Coverage report
Note: Android and iOS native tests require running within an Expo app context due to module dependencies. The `test:android` command provides static analysis and validation of the Android test code structure.
Architecture
Built with enterprise-grade patterns and full TypeScript support:
- Type Safety: Comprehensive TypeScript definitions for all APIs
- SOLID Principles: Single responsibility, dependency injection, interface segregation
- Thread Safety: Proper synchronization with DispatchQueue (iOS) and Mutex (Android)
- Error Handling: Result types and graceful degradation
- Memory Management: Efficient buffer pooling and automatic cleanup
Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Development Setup
# Clone and setup
git clone https://github.com/truemagic-coder/expo-streamer.git
cd expo-streamer
npm install
# Run example app
cd example
npm run ios # or npm run android
Version Management
The package uses synchronized versioning across all platforms:
- npm package: Version defined in `package.json`
- iOS podspec: Automatically reads from `package.json` via `package['version']`
- Android gradle: Automatically reads from `package.json` via `getPackageJsonVersion()`
To update the version, change only `package.json`; all other platforms will sync automatically.
Code Standards
- Full TypeScript support with strict mode
- Follow SOLID principles
- Include comprehensive tests
- Ensure thread safety
- Document all public APIs
License
MIT License - see LICENSE file for details.
Acknowledgments: This project is a hard fork based on original work by Alexander Demchuk, also under MIT License.
Support
- Bug Reports: GitHub Issues
- Feature Requests: GitHub Discussions
- Documentation: GitHub Wiki