Natural voice conversations for AI assistants. Voice Mode brings human-like voice interactions to Claude Code and other AI code editors through the Model Context Protocol (MCP). It supports multiple transports, real-time interactions, and seamless integration with various AI coding assistants.
🖥️ Compatibility
Runs on: Linux • macOS • Windows (WSL) • NixOS | Python: 3.10+
✨ Features
🎙️ Voice conversations with Claude - ask questions and hear responses
🔄 Multiple transports - local microphone or LiveKit room-based communication
🗣️ OpenAI-compatible - works with any STT/TTS service (local or cloud)
⚡ Real-time - low-latency voice interactions with automatic transport selection
🔧 MCP Integration - seamless with Claude Desktop and other MCP clients
🎯 Silence detection - automatically stops recording when you stop speaking (no more waiting!)
🎯 Simple Requirements
All you need to get started:
🎤 Computer with microphone and speakers OR ☁️ LiveKit server (LiveKit Cloud or self-hosted)
🔑 OpenAI API Key (optional) - Voice Mode can install free, open-source transcription and text-to-speech services locally
Quick Start
📖 Using a different tool? See our Integration Guides for Cursor, VS Code, Gemini CLI, and more!
Automatic Installation (Recommended)
Install Claude Code with Voice Mode configured and ready to run on Linux, macOS, and Windows (WSL) using the Quick Install commands below.
Note for WSL2 users: WSL2 requires additional audio packages (pulseaudio, libasound2-plugins) for microphone access. See our WSL2 Microphone Access Guide if you encounter issues.
Within WSL, follow the same instructions as for Ubuntu/Debian Linux.
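To install those audio packages inside WSL2, here is a minimal sketch for Ubuntu/Debian (package names may differ on other distributions):
# Install the WSL2 audio packages mentioned above (Ubuntu/Debian)
sudo apt update && sudo apt install -y pulseaudio libasound2-plugins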
NixOS: Voice Mode includes a flake.nix with all required dependencies. You can either:
Use the development shell (temporary):
nix develop github:mbailey/voicemode
Install system-wide (see Installation section below)
Quick Install
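# Using Claude Code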
claude mcp add --scope user voice-mode uvx voice-mode
# Using Claude Code with Nix (NixOS)
claude mcp add voice-mode nix run github:mbailey/voicemode
# Using UV
uvx voice-mode
# Using pip
pip install voice-mode
# Using Nix (NixOS)
nix run github:mbailey/voicemode
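If you added Voice Mode through Claude Code, you can confirm the MCP server is registered (assuming the claude CLI is on your PATH):
# List the MCP servers Claude Code knows about
claude mcp list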
Configuration for AI Coding Assistants
📖 Looking for detailed setup instructions? Check our comprehensive Integration Guides for step-by-step instructions for each tool!
Below are quick configuration snippets. For full installation and setup instructions, see the integration guides above.
claude mcp add voice-mode -- uvx voice-mode
Or with environment variables:
claude mcp add voice-mode --env OPENAI_API_KEY=your-openai-key -- uvx voice-mode
Tools
Tool | Description | Key Parameters
converse | Have a voice conversation - speak and optionally listen | message, wait_for_response (default: true), listen_duration (default: 30s), transport (auto/local/livekit)
listen_for_speech | Listen for speech and convert to text | duration (default: 5s)
check_room_status | Check LiveKit room status and participants | None
check_audio_devices | List available audio input/output devices | None
start_kokoro | Start the Kokoro TTS service | models_dir (optional, defaults to ~/Models/kokoro)
stop_kokoro | Stop the Kokoro TTS service | None
kokoro_status | Check the status of the Kokoro TTS service | None
install_whisper_cpp | Install whisper.cpp for local STT | install_dir, model (default: base.en), use_gpu (auto-detect)
install_kokoro_fastapi | Install kokoro-fastapi for local TTS | install_dir, port (default: 8880), auto_start (default: true)
Note: The converse tool is the primary interface for voice interactions, combining speaking and listening in a natural flow.
New: The install_whisper_cpp and install_kokoro_fastapi tools help you set up free, private, open-source voice services locally. See Installation Tools Documentation for detailed usage.
Configuration
The only required configuration is your OpenAI API key:
export OPENAI_API_KEY="your-key"
Optional Settings
export STT_BASE_URL="http://127.0.0.1:2022/v1" # Local Whisper
export TTS_BASE_URL="http://127.0.0.1:8880/v1" # Local TTS
export TTS_VOICE="alloy" # Voice selection
# Or use voice preference files (see Configuration docs)
# Project: /your-project/voices.txt or /your-project/.voicemode/voices.txt
# User: ~/voices.txt or ~/.voicemode/voices.txt
# LiveKit (for room-based communication)
# See docs/livekit/ for setup guide
export LIVEKIT_URL="wss://your-app.livekit.cloud"
export LIVEKIT_API_KEY="your-api-key"
export LIVEKIT_API_SECRET="your-api-secret"
# Debug mode
export VOICEMODE_DEBUG="true"
# Save all audio (TTS output and STT input)
export VOICEMODE_SAVE_AUDIO="true"
# Audio format configuration (default: pcm)
export VOICEMODE_AUDIO_FORMAT="pcm" # Options: pcm, mp3, wav, flac, aac, opus
export VOICEMODE_TTS_AUDIO_FORMAT="pcm" # Override for TTS only (default: pcm)
export VOICEMODE_STT_AUDIO_FORMAT="mp3" # Override for STT upload
# Format-specific quality settings
export VOICEMODE_OPUS_BITRATE="32000" # Opus bitrate (default: 32kbps)
export VOICEMODE_MP3_BITRATE="64k" # MP3 bitrate (default: 64k)
Audio Format Configuration
Voice Mode uses the PCM audio format by default for TTS streaming, which gives optimal real-time performance:
PCM (default for TTS): Zero latency, best streaming performance, uncompressed
MP3: Wide compatibility, good compression for uploads
WAV: Uncompressed, good for local processing
FLAC: Lossless compression, good for archival
AAC: Good compression, Apple ecosystem
Opus: Small files but NOT recommended for streaming (quality issues)
The audio format is automatically validated against provider capabilities and will fall back to a supported format if needed.
Local STT/TTS Services
For privacy-focused or offline usage, Voice Mode supports local speech services:
Whisper.cpp - Local speech-to-text with OpenAI-compatible API
Kokoro - Local text-to-speech with multiple voice options
These services provide the same API interface as OpenAI, allowing seamless switching between cloud and local processing.
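As an illustration, a local kokoro-fastapi instance answers the same /v1/audio/speech request that OpenAI does. The sketch below assumes the default port 8880 from the settings above; the model and voice names your local install accepts may differ:
# Request speech from a local OpenAI-compatible TTS endpoint (assumes kokoro-fastapi on port 8880)
curl -s -X POST "http://127.0.0.1:8880/v1/audio/speech" \
  -H "Content-Type: application/json" \
  -d '{"model": "tts-1", "voice": "alloy", "input": "Hello from Voice Mode"}' \
  -o hello.mp3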
OpenAI API Compatibility Benefits
By strictly adhering to OpenAI's API standard, Voice Mode enables powerful deployment flexibility:
🔀 Transparent Routing: Users can implement their own API proxies or gateways outside of Voice Mode to route requests to different providers based on custom logic (cost, latency, availability, etc.)
🎯 Model Selection: Deploy routing layers that select optimal models per request without modifying Voice Mode configuration
💰 Cost Optimization: Build intelligent routers that balance between expensive cloud APIs and free local models
🔧 No Lock-in: Switch providers by simply changing the BASE_URL - no code changes required
Example: Simply set OPENAI_BASE_URL to point to your custom router:
export OPENAI_BASE_URL="https://router.example.com/v1"
export OPENAI_API_KEY="your-key"
# Voice Mode now uses your router for all OpenAI API calls
The OpenAI SDK handles this automatically - no Voice Mode configuration needed!
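A quick way to confirm your router is reachable, assuming it proxies the standard /v1/models endpoint:
# Sanity check: list models through the router
curl -s "$OPENAI_BASE_URL/models" -H "Authorization: Bearer $OPENAI_API_KEY"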