Quick Start

Get your AI agent into a voice meeting in minutes. Pick the integration that fits your setup.

1. Get an API Key

Sign up at chamade.io/dashboard, then create an API key. It starts with chmd_ and is shown only once — store it securely.

2. Choose Your Integration

MCP Server (HTTP — recommended)

For Claude Desktop, Claude Code, Cursor, Windsurf, and any MCP client that supports the Streamable HTTP transport. Drop the snippet below in your MCP config file (.mcp.json, claude_desktop_config.json, .cursor/mcp.json…) and restart the client.

json
{
  "mcpServers": {
    "chamade": {
      "type": "http",
      "url": "https://mcp.chamade.io/mcp/",
      "headers": {
        "Authorization": "Bearer chmd_..."
      }
    }
  }
}

Claude Code only: to receive push events (incoming calls, DMs) in real time, also launch each session with claude --dangerously-load-development-channels server:chamade --continue. No flag is needed for the tools themselves — they work immediately.

Legacy stdio-only clients (older MCP clients without Streamable HTTP): use the @chamade/mcp-server@3 stdio shim — it's a thin wrapper around mcp-remote that bridges stdio to the same hosted HTTP endpoint. See the full MCP setup guide for the shim config.

Full MCP setup guide →

REST API

For any language or framework. Direct HTTP calls — no MCP client needed. Works great for pure-backend agents, non-LLM orchestration, webhooks, and anything that speaks HTTP.

bash
curl -X POST https://chamade.io/api/call \
  -H "X-API-Key: chmd_..." \
  -H "Content-Type: application/json" \
  -d '{"platform":"discord","meeting_url":"https://discord.com/channels/..."}'
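If you'd rather place the same call from code, here is a minimal Python sketch using only the standard library. The endpoint, headers, and body mirror the curl example above; the helper name is ours, not part of the API.

```python
import json
import urllib.request

API_URL = "https://chamade.io/api/call"

def build_call_request(api_key: str, platform: str, meeting_url: str) -> urllib.request.Request:
    """Build the POST /api/call request (same shape as the curl example)."""
    body = json.dumps({"platform": platform, "meeting_url": meeting_url}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# To actually send it:
# with urllib.request.urlopen(build_call_request("chmd_...", "discord",
#                             "https://discord.com/channels/...")) as resp:
#     call = json.load(resp)   # includes call_id and, for voice, an "audio" block
```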

Full API reference →

Voice calls — BYO audio WebSocket

For voice calls, MCP and REST give you control (join, leave, chat, state), but the raw audio flows over a separate WebSocket so your agent can plug in its own STT/TTS stack (OpenAI Realtime, LiveKit Agents, Pipecat, Deepgram Voice Agent, ElevenLabs, Cartesia, Whisper cascade, etc.). POST /api/call returns an audio block that tells your host code where to connect and what PCM format to stream.

json
{
  "call_id": "abc-123",
  "capabilities": ["audio_in", "audio_out", "read", "write"],
  "audio": {
    "stream_url": "wss://chamade.io/api/call/abc-123/stream",
    "format": "pcm_s16le",
    "sample_rate": 48000,
    "frame_duration_ms": 20,
    "direction": "bidirectional",
    "docs": "https://chamade.io/docs/api#websocket"
  }
}

Your agent's host code (not the LLM itself) opens the stream_url in parallel with MCP/REST, pipes binary PCM frames in both directions between the call and your chosen STT/TTS, and lets the LLM drive the conversation via text. Hosted STT/TTS (Chamade runs the speech layer for you) is currently beta-gated — contact [email protected] for supervised access.
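As a concrete starting point, here is a hedged Python sketch of that host-side loop. The frame math follows from the audio block above (pcm_s16le at 48 kHz, 20 ms frames; mono is our assumption). The pipe loop uses the third-party websockets package, the STT/TTS hooks are placeholders for whatever stack you choose, and "one binary message per frame" is an assumption; the protocol docs linked below are authoritative.

```python
import asyncio

# Values from the "audio" block returned by POST /api/call
SAMPLE_RATE = 48_000   # sample_rate
FRAME_MS = 20          # frame_duration_ms
BYTES_PER_SAMPLE = 2   # pcm_s16le: signed 16-bit little-endian; mono assumed

def frame_bytes(sample_rate: int = SAMPLE_RATE, frame_ms: int = FRAME_MS) -> int:
    """Bytes in one PCM frame: samples per frame times bytes per sample."""
    return (sample_rate * frame_ms // 1000) * BYTES_PER_SAMPLE

async def your_stt(pcm: bytes) -> str:
    raise NotImplementedError("plug in Deepgram, Whisper, etc.")

async def your_tts(text: str) -> bytes:
    raise NotImplementedError("plug in ElevenLabs, Cartesia, etc.")

async def pipe_audio(stream_url: str) -> None:
    # Requires the third-party `websockets` package; the one-frame-per-message
    # framing here is our assumption, not the documented protocol.
    import websockets  # pip install websockets
    async with websockets.connect(stream_url) as ws:
        async for frame in ws:                    # binary PCM from the call
            reply = await your_tts(await your_stt(frame))
            size = frame_bytes()
            for i in range(0, len(reply), size):
                await ws.send(reply[i:i + size])  # binary PCM back to the call

# Host code wires it up alongside MCP/REST, e.g.:
# asyncio.run(pipe_audio(call["audio"]["stream_url"]))
```

At 48 kHz, each 20 ms frame is 960 samples, i.e. 1 920 bytes of s16le mono; size your read/write buffers accordingly.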

Audio WebSocket protocol →

3. Connect Your Platforms

Some platforms require setup in the Dashboard before use:

| Platform | Setup needed |
| --- | --- |
| Discord | Works out of the box (shared bot) or add your own bot |
| Microsoft Teams | Connect Microsoft account (OAuth) |
| Google Meet | Connect Google account (OAuth) |
| Zoom | No setup — just pass a meeting URL |
| Telegram | Works out of the box or add your own bot |
| WhatsApp | No setup — invite the bot |
| Slack | Install the Slack app |
| Nextcloud Talk | Install addon + connect |
| SIP / Phone | Activate a phone number or bring your own trunk |