GicellBot
A self-hosted voice-capable AI bot for Free4Talk voice rooms. It listens to voice conversations, responds through speech, plays music, and runs a mini game economy — all from a single Node.js process.
Built and maintained by Gilang Raja (@Gicelldev)
What it does
- 🎙️ Listens to voice — transcribes peer audio in real-time using Groq Whisper, detects a wake word, then responds via Edge TTS
- 🎵 Plays music — searches YouTube and streams audio directly into the WebRTC voice room
- 🤖 AI chat — converses in the room chat using NVIDIA NIM (configurable model), with per-user memory
- 🎮 Mini economy — daily rewards, grinding jobs, RPG hunting, gacha, gambling, shop
- 🔍 Web search + weather — auto-fetches Bing results and real-time weather when users ask
- 🔌 Plugin system — drop a `.js` file into `/plugins/` and it loads automatically on next start
Requirements
- Node.js 18+
- yt-dlp installed and in PATH (for music streaming)
- A free Groq account (for Whisper STT)
- A free NVIDIA NIM account (for AI chat)
- A Free4Talk account for the bot to log in as
Setup
There are two ways to get started:
Option A — npx (recommended)
```
npx create-gicellbot my-bot
cd my-bot
```

This copies all bot files to `my-bot/`, runs `npm install`, and prints next steps.
Option B — clone from GitHub
```
git clone https://github.com/Gicelldev/free4talkbot.git my-bot
cd my-bot
npm install
```

After either option:
1. Configure environment
```
cp .env.example .env
```

Open `.env` and fill in your API keys:

```
GROQ_API_KEYS=gsk_yourkey1,gsk_yourkey2
NIM_API_KEYS=nvapi-yourkey1,nvapi-yourkey2
BOT_NAME=GicellBot
ROOM_URL=https://www.free4talk.com/room/YOURROOM?key=YOURKEY
```

Multiple keys are supported and rotated automatically — useful if you hit the free tier rate limit.
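The rotation idea can be sketched in a few lines. This is a hypothetical helper, not the bot's actual code; the function name is illustrative, and it assumes keys arrive as a comma-separated string as shown above.

```javascript
// Hypothetical sketch of key rotation: parse the comma-separated env value
// once, then hand out keys round-robin so each request uses the next key.
function makeKeyRotator(csv) {
  const keys = (csv || '').split(',').map((k) => k.trim()).filter(Boolean);
  if (keys.length === 0) throw new Error('no API keys configured');
  let i = 0;
  return () => keys[i++ % keys.length];
}

// Usage: const nextGroqKey = makeKeyRotator(process.env.GROQ_API_KEYS);
```

Round-robin is the simplest policy; a real implementation might also skip a key temporarily after a 429 response.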
2. Log in to Free4Talk
```
npm run setup
```

This opens a browser and saves the session to `./profile/`. You only need to do this once.
3. Customize the AI personality (optional)
Edit the buildSystemPrompt() function in ai.js. The default is a minimal template — add whatever persona, rules, or context you want.
4. Run
```
npm start
```

Voice Mode
The bot listens to all peers in the voice room and runs each utterance through Whisper STT.
By default it only responds when someone says the bot's name first (wake word mode). You can switch to talk mode where it responds to everything.
!voice → show current status
!voice on/off → enable or disable voice listening
!talkmode → toggle talk mode on/off
!talkmode on/off → force a specific state

Wake word detection is fuzzy — it handles common Whisper transcription quirks like "Gicil", "Gitel", "Kicel" etc. and still matches correctly.
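One common way to get this kind of tolerance is edit-distance matching against known misspellings. The sketch below is an assumption about the approach, not the actual `voice.js` logic; the variant list and threshold are illustrative.

```javascript
// Hypothetical fuzzy wake-word matcher: a word matches if it is within one
// edit (insert/delete/substitute) of any known variant spelling.
const WAKE_VARIANTS = ['gicell', 'gicel', 'gicil', 'gitel', 'kicel'];

// Classic Levenshtein edit distance between two strings.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Normalize the transcript, then test each word against each variant.
function hasWakeWord(transcript) {
  const words = transcript.toLowerCase().replace(/[^a-z\s]/g, ' ').split(/\s+/);
  return words.some((w) => WAKE_VARIANTS.some((v) => editDistance(w, v) <= 1));
}
```

With a threshold of one edit, "Gicil" or "Kicel" match while ordinary words like "nice" or "weather" do not.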
Music Commands
| Command | Description |
|---|---|
| `!play <title or URL>` | Search YouTube and play |
| `!search <title>` | Show top 10 results |
| `!skip` | Skip current track |
| `!stop` | Stop and clear queue |
| `!queue` / `!q` | Show queue |
| `!np` | Now playing |
| `!repeat` | Toggle loop |
| `!vol <0-100>` | Set volume |
Audio effects:
!bass, !treble, !reverb, !8d, !speed, !nightcore, !vaporwave, !slowed, !fx, !fxreset
Economy Commands
| Command | Description |
|---|---|
| `!daily` | Claim daily reward |
| `!balance` | Check coin balance |
| `!shop` | Browse item shop |
| `!buy <item>` | Purchase an item |
| `!inv` | View inventory |
| `!hunt` | Go hunting (requires HP > 20) |
| `!heal` | Restore HP |
60+ grind jobs available. Each job requires specific equipment from the shop.
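To illustrate the equipment gate, here is a minimal sketch of how a job table might work. The job names, item IDs, and reward ranges are made up for illustration; they are not the bot's actual economy data.

```javascript
// Hypothetical job table: each job names a required shop item and a
// coin reward range. Real data lives in economy.js.
const JOBS = {
  miner:  { requires: 'pickaxe',     reward: [80, 150] },
  fisher: { requires: 'fishing_rod', reward: [50, 120] },
};

// A job is workable only if its required shop item is in the inventory.
function canWork(job, inventory) {
  const def = JOBS[job];
  return Boolean(def) && inventory.includes(def.requires);
}

// Random payout within the job's reward range (inclusive).
function workPayout(job) {
  const [min, max] = JOBS[job].reward;
  return min + Math.floor(Math.random() * (max - min + 1));
}
```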
Writing a Plugin
Drop a file in ./plugins/ — it gets loaded on the next restart.
```js
// plugins/hello.js
module.exports = {
  commands: ['hello', 'hi'],
  handle: async (cmd, args, msg, { sender, sendMessage }) => {
    await sendMessage(`Hey ${sender.name}!`);
  }
};
```

The `ctx` object passed to `handle` includes:

- `sender` — `{ name, role, uid }`
- `sendMessage(text)` — post to chat
- `botState` — shared bot state (music, queue, etc.)
- `page` — Playwright page instance
- `log(msg, level)` — structured logger
How it works under the hood
The bot runs a headless Chromium browser via Playwright, logs into Free4Talk as a normal user, and joins the room. From there:
- Music is streamed by fetching a YouTube audio URL via `yt-dlp`, then injecting it directly into the WebRTC audio track using the Web Audio API — no virtual audio devices needed
- Voice listening is done by intercepting WebRTC peer audio tracks in the browser, recording them as PCM chunks, and sending them to Node.js for Whisper transcription
- TTS is generated server-side via `msedge-tts` and played back through the same injected audio pipeline
Project Structure
```
├── manager.js     Main entry point + web dashboard
├── server.js      Bot logic + browser automation (Playwright)
├── ai.js          AI chat handler (NVIDIA NIM, memory, web search)
├── voice.js       Voice mode — wake word detection, STT, TTS
├── stt.js         Groq Whisper integration
├── tts.js         Edge TTS wrapper
├── commands.js    Command router + music engine
├── economy.js     Economy data layer
├── fun_api.js     External API helpers (prayer times, jokes, etc.)
├── plugins/       Modular command plugins
└── public/        Dashboard frontend
```

License
MIT