Voice AI quickstart
Set up a working voice agent in under five minutes. This page walks you through installing the Agora skills, which give your AI coding assistant the official quickstarts and Agora CLI workflows. You then use the CLI to log in, clone the official starter, and run it locally. You can follow the CLI steps yourself, or paste the sample prompt and let your assistant handle setup for you.
Install Agora skills
Agora skills teach your AI coding assistant how to work with Conversational AI projects using official starter repos and the Agora CLI, including signing in, binding projects, generating environment files, and running diagnostics. Install the CLI in the next section, or ask your assistant to run the installer for you.
Paste the following prompt into your assistant's chat. You can replace Python with TypeScript or Go, depending on your language preference:
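The official sample prompt is not reproduced here; a minimal prompt along these lines (the wording is illustrative, not the official text) covers the same steps:

```text
Install the Agora CLI, sign me in to Agora Console, then use the CLI to
scaffold the official Conversational AI starter for Python, bind it to my
Agora project, generate the environment file, and run it locally.
```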
Your assistant installs the CLI, signs you in, scaffolds the official starter, and guides you through the remaining steps. Follow the manual CLI steps below if you prefer to run each command yourself.
Install the Agora CLI
The Agora CLI is a native Go binary available at AgoraIO-Community/cli.
- macOS and Linux

  If the `agora` command is not found after installation, re-run the installer with `--add-to-path` or manually add the install directory to your shell profile.

- Windows (PowerShell)

  If your execution policy blocks inline scripts, download `install.ps1` and run it directly.
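One common way to run a downloaded script without changing your system-wide policy is a per-invocation bypass (the file path here assumes you saved the installer to the current directory):

```powershell
# Run the downloaded installer with the execution policy bypassed
# for this single invocation only.
powershell -ExecutionPolicy Bypass -File .\install.ps1
```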
Sign in, scaffold, and run
Sign in with the Agora CLI, clone the starter project, and configure it for your chosen language.
1. Sign in to Agora Console.

2. Use `agora init` to clone the official starter for your chosen template, bind it to your Agora project, and write the runtime-specific environment file.

   - Python
   - TypeScript
   - Go

3. Open http://localhost:3000 and click Start conversation.

If the agent does not join or transcripts do not appear, run `agora project doctor` to check credential validity, feature enablement, and network reachability.
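The environment file that `agora init` writes is runtime-specific. As a rough sketch, it holds your project credentials in key-value form; the variable names below are assumptions for illustration, not the actual file format:

```text
# Hypothetical .env contents -- your generated file may differ.
AGORA_APP_ID=your-app-id
AGORA_APP_CERTIFICATE=your-app-certificate
```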
Next steps
Now that you have a working agent, explore the following topics:
- Explore the SDK API references for TypeScript, Python, and Go.
- Build a backend and client from scratch: Write every layer of the backend and frontend yourself.
- Integrate an MLLM: Replace the cascading STT → LLM → TTS pipeline with a single realtime model.
- Transmit custom information: Guide the agent with user-specific context to personalize responses.
- Integrate short-term memory: Help the agent maintain context across a conversation.
- Use filler words: Reduce perceived latency by filling silence during LLM processing.