CODE EXAMPLES
Copy-paste examples for every Atlas API workflow. Jobs use the async pattern: submit → poll → download, or skip polling entirely with webhooks.
pip install requests

Base URL: https://api.atlasv1.com

Every generation endpoint is async. You submit a job and get back a job_id. Then either poll GET /v1/jobs/{id} until it completes, or pass a callback_url to get notified via webhook. Download the result from GET /v1/jobs/{id}/result.
Quickstart
The simplest possible call — submit an audio file and a face image, poll for completion, then download the MP4.
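A minimal sketch of that flow using requests. The /v1/generate, /v1/jobs/{id}, and /v1/jobs/{id}/result endpoints come from the overview above; the multipart field names (audio, face_image) and the completed/failed status values are assumptions — check the API reference for the exact names.

```python
import time
import requests

BASE = "https://api.atlasv1.com"
HEADERS = {"Authorization": "Bearer YOUR_ATLAS_API_KEY"}

def generate_avatar_video(audio_path, image_path, out_path="avatar.mp4"):
    # Submit the job (multipart field names are assumptions)
    with open(audio_path, "rb") as audio, open(image_path, "rb") as image:
        resp = requests.post(
            f"{BASE}/v1/generate",
            headers=HEADERS,
            files={"audio": audio, "face_image": image},
        )
    resp.raise_for_status()
    job_id = resp.json()["job_id"]

    # Poll until the job reaches a terminal state
    while True:
        job = requests.get(f"{BASE}/v1/jobs/{job_id}", headers=HEADERS).json()
        if job["status"] == "completed":
            break
        if job["status"] == "failed":
            raise RuntimeError(f"Job {job_id} failed: {job}")
        time.sleep(3)

    # Download the finished MP4
    result = requests.get(f"{BASE}/v1/jobs/{job_id}/result", headers=HEADERS)
    result.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(result.content)
    return out_path
```

Call it with any short audio clip and a face image: generate_avatar_video("speech.mp3", "face.jpg").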
Full Pipeline
Two-step flow: generate speech with ElevenLabs (or any TTS), then feed it into avatar generation. TTS returns audio directly; only the video step is an async job.
Step 1: Text → speech audio via ElevenLabs (or any TTS provider)
Step 2: Audio + Image → MP4 video via /v1/generate
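The two steps above can be sketched like this. Step 1 calls the ElevenLabs REST text-to-speech endpoint directly (the elevenlabs SDK wraps the same call); the default voice_id is ElevenLabs' "Rachel" voice. Step 2's Atlas field names are assumptions, and polling proceeds exactly as in the quickstart.

```python
import requests

ATLAS_BASE = "https://api.atlasv1.com"

def text_to_audio(text, voice_id="21m00Tcm4TlvDq8ikWAM", out_path="speech.mp3"):
    # Step 1: ElevenLabs TTS returns the audio bytes directly -- no job to poll
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": "YOUR_ELEVENLABS_API_KEY"},
        json={"text": text, "model_id": "eleven_multilingual_v2"},
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
    return out_path

def audio_to_video(audio_path, image_path):
    # Step 2: submit audio + face image to Atlas; only this step is an async job
    with open(audio_path, "rb") as a, open(image_path, "rb") as img:
        resp = requests.post(
            f"{ATLAS_BASE}/v1/generate",
            headers={"Authorization": "Bearer YOUR_ATLAS_API_KEY"},
            files={"audio": a, "face_image": img},  # field names assumed
        )
    resp.raise_for_status()
    # Poll GET /v1/jobs/{job_id} until complete, as in the quickstart
    return resp.json()["job_id"]
```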
Text → Video (ElevenLabs)
Two-step convenience flow: call ElevenLabs TTS for the audio, then submit it with a face image to Atlas for avatar video generation. Atlas has no built-in TTS, so any provider works.
ElevenLabs + Atlas
Use ElevenLabs for voice generation, then submit the audio to Atlas for avatar video. Same async poll pattern.
pip install elevenlabs

OpenAI TTS + Atlas
Use OpenAI's text-to-speech for voice generation, then submit to Atlas for the avatar video.
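A sketch of the OpenAI variant, calling the /v1/audio/speech REST endpoint directly (the openai SDK's client.audio.speech.create wraps the same call). The resulting file is then submitted to Atlas exactly as in the quickstart.

```python
import requests

def openai_tts(text, out_path="speech.mp3"):
    # OpenAI text-to-speech returns audio bytes directly -- no job to poll
    resp = requests.post(
        "https://api.openai.com/v1/audio/speech",
        headers={"Authorization": "Bearer YOUR_OPENAI_API_KEY"},
        json={"model": "tts-1", "voice": "alloy", "input": text},
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
    # Submit out_path to POST /v1/generate as in the quickstart
    return out_path
```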
pip install openai

Webhook Receiver
Skip polling entirely — pass a callback_url when submitting a job and Atlas will POST the result to your server when it's done. This example shows the submit side and a minimal Flask receiver.
Submit with webhook
Receive the webhook — Flask
Webhook + FastAPI
A production-ready async webhook receiver with signature verification, background downloading, and proper error handling.
pip install fastapi uvicorn httpx

React Hook — @northmodellabs/atlas-react
Connect a live avatar in React with a single hook call. The useAtlasSession() hook handles all LiveKit wiring — room lifecycle, video/audio tracks, microphone, transcriptions, and cleanup.
npm install @northmodellabs/atlas-react livekit-client

Frontend — React Component
Backend — API Route (Next.js)
React Hook — Passthrough Mode
Bring your own LLM + TTS and use publishAudio() to send audio to the avatar for lip-sync. Atlas provides the GPU compute and WebRTC video.
publishAudio() accepts a base64 string, Blob, or ArrayBuffer. It automatically mutes your mic during playback, publishes the audio as a LiveKit track for avatar lip-sync, plays it locally, and cleans up when done. See API docs for full details.
Manual LiveKit Integration
If you prefer full control, connect to a realtime session using the LiveKit client SDK directly. Your backend creates the session and passes the token to the client.
Simpler option: Use @northmodellabs/atlas-react to replace all this boilerplate with a single useAtlasSession() hook call.
Batch Processing
Generate multiple avatar videos from a list of scripts. Call ElevenLabs for each script (returns audio directly), then submit all video jobs to Atlas and poll in parallel.
Concurrency tip: Since jobs run server-side, you can submit all of them upfront, then poll in parallel. No need to wait for one to finish before submitting the next.
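A sketch of that pattern with a thread pool: submit every job upfront, then wait on all of them in parallel. The TTS step per script works as in the pipeline example above; the Atlas field names and status values remain assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE = "https://api.atlasv1.com"
HEADERS = {"Authorization": "Bearer YOUR_ATLAS_API_KEY"}

def submit(audio_path, image_path):
    with open(audio_path, "rb") as a, open(image_path, "rb") as img:
        resp = requests.post(f"{BASE}/v1/generate", headers=HEADERS,
                             files={"audio": a, "face_image": img})
    resp.raise_for_status()
    return resp.json()["job_id"]

def wait_and_download(job_id):
    # Poll one job to completion, then stream its MP4 to disk
    while True:
        job = requests.get(f"{BASE}/v1/jobs/{job_id}", headers=HEADERS).json()
        if job["status"] == "failed":
            raise RuntimeError(f"Job {job_id} failed")
        if job["status"] == "completed":
            break
        time.sleep(5)
    out_path = f"{job_id}.mp4"
    result = requests.get(f"{BASE}/v1/jobs/{job_id}/result", headers=HEADERS)
    result.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(result.content)
    return out_path

def batch(pairs):
    # Submit everything first -- the jobs run server-side -- then poll in parallel
    job_ids = [submit(audio, image) for audio, image in pairs]
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(wait_and_download, job_ids))
```

batch([("a.mp3", "face1.jpg"), ("b.mp3", "face2.jpg")]) returns the list of downloaded MP4 paths in input order.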
Production-Ready
A reusable function with proper error handling, rate limit retries, progress logging, and streaming file downloads. Supports both polling and webhook modes.
Webhook alternative: For server-to-server workflows, skip polling entirely by passing a callback_url. Atlas will POST the result to your endpoint when the job finishes. See examples 9–10 for receiver code.
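The retry and download pieces could look like this — exponential backoff on HTTP 429 (honoring Retry-After when the API sends it) and a streamed download that never buffers the whole MP4 in memory. The header handling is generic requests usage, not Atlas-specific behavior.

```python
import time

import requests

MAX_RETRIES = 5

def backoff_delay(attempt, base=1.0, cap=30.0):
    # Exponential backoff: 1s, 2s, 4s, ... capped at `cap` seconds
    return min(cap, base * (2 ** attempt))

def post_with_retry(session, url, **kwargs):
    for attempt in range(MAX_RETRIES):
        resp = session.post(url, **kwargs)
        if resp.status_code == 429:
            # Prefer the server's Retry-After hint, else back off exponentially
            delay = float(resp.headers.get("Retry-After", backoff_delay(attempt)))
            time.sleep(delay)
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError(f"Rate-limited {MAX_RETRIES} times in a row: {url}")

def download_streaming(session, url, out_path):
    # Stream the result to disk in 64 KiB chunks instead of buffering it
    with session.get(url, stream=True) as resp:
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 16):
                f.write(chunk)
    return out_path
```

Reuse one requests.Session (with the Authorization header set via session.headers) across submit, poll, and download calls so connections are pooled.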