Use Deepgram’s streaming transcription via WebSocket for live event coverage. As transcript segments arrive in real time, batch them every 30 seconds and send each batch to Mavera Generate for live-blogging content, turning a keynote into publishable blog snippets as it happens.

Flow: Deepgram WebSocket wss://api.deepgram.com/v1/listen?model=nova-3&encoding=linear16&sample_rate=16000 → real-time segments → batch every 30s → Mavera POST /generations → live blog posts
Streaming requires a live audio source; the examples read from a WAV file to simulate one. In production, pipe audio from a microphone, RTMP feed, or SIP trunk, and match encoding and sample_rate to your source.
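A minimal sketch of the simulated live source, assuming the `websockets` package (the header keyword is `extra_headers` through v13 and `additional_headers` from v14) and a canonical 16 kHz, 16-bit mono WAV file:

```python
import asyncio
import json
import websockets

DG_URL = ("wss://api.deepgram.com/v1/listen"
          "?model=nova-3&encoding=linear16&sample_rate=16000&interim_results=false")

async def simulate_live(wav_path: str, api_key: str, buffer: list[str]) -> None:
    headers = {"Authorization": f"Token {api_key}"}
    # `extra_headers` through websockets v13; `additional_headers` from v14.
    async with websockets.connect(DG_URL, extra_headers=headers) as ws:

        async def send_audio():
            with open(wav_path, "rb") as f:
                f.seek(44)  # skip the canonical 44-byte WAV header; raw PCM follows
                while chunk := f.read(8000):  # 8000 bytes = 0.25s of 16kHz 16-bit mono
                    await ws.send(chunk)
                    await asyncio.sleep(0.25)  # pace the file like a live feed
            await ws.send(json.dumps({"type": "CloseStream"}))

        async def read_transcripts():
            async for msg in ws:
                data = json.loads(msg)
                if data.get("type") != "Results":
                    continue
                text = data["channel"]["alternatives"][0].get("transcript", "")
                if data.get("is_final") and text:
                    buffer.append(text)  # collected for the 30s Mavera batch

        await asyncio.gather(send_audio(), read_transcripts())
```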
Sample output:

```
Connected to Deepgram streaming API
[ 1] Welcome everyone to the 2026 product summit.
[ 12] Let me show you the dashboard in action.

LIVE BLOG #1: The 2026 Product Summit opens with CEO Maria Chen announcing
three major platform updates. The headline: a new AI engine processing
content 40% faster. "This isn't incremental," Chen says.

LIVE BLOG COMPLETE: 8 posts generated
```
Reconnect with exponential backoff (1s initial delay, doubling to a 30s cap, 5 attempts). In production, persist the transcript buffer to disk between reconnections so no segments are lost, and send heartbeat pings every 10 seconds to keep the stream alive.
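A sketch of that policy; the delays and attempt count come from the note above, and the on-disk buffer persistence is left out:

```python
import asyncio
import json
import random
import websockets

async def connect_with_backoff(url: str, headers: dict, max_attempts: int = 5):
    """Retry the WebSocket connect with exponential backoff: 1s doubling to a 30s cap."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            return await websockets.connect(url, extra_headers=headers)
        except (OSError, websockets.WebSocketException):
            if attempt == max_attempts:
                raise
            await asyncio.sleep(delay + random.random())  # jitter avoids synchronized retries
            delay = min(delay * 2, 30.0)

async def heartbeat(ws, interval: float = 10.0):
    """Deepgram drops idle streams; a KeepAlive message every 10s holds the connection open."""
    while True:
        await asyncio.sleep(interval)
        await ws.send(json.dumps({"type": "KeepAlive"}))
```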
Audio encoding mismatch
Match encoding and sample_rate to your source: linear16+16000 (telephony), linear16+44100 (broadcast), opus+48000 (WebRTC). Mismatches produce garbled transcripts.
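One way to keep the pairings straight is a lookup table, as in this sketch (the profile names are illustrative; the values are the ones listed above):

```python
from urllib.parse import urlencode

# Common source profiles, per the note above.
SOURCE_PROFILES = {
    "telephony": {"encoding": "linear16", "sample_rate": 16000},
    "broadcast": {"encoding": "linear16", "sample_rate": 44100},
    "webrtc":    {"encoding": "opus",     "sample_rate": 48000},
}

def listen_url(source: str, model: str = "nova-3") -> str:
    """Build the Deepgram streaming URL for a known audio source type."""
    params = {"model": model, **SOURCE_PROFILES[source]}
    return "wss://api.deepgram.com/v1/listen?" + urlencode(params)
```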
Interim vs. final results
interim_results=false returns only finalized transcripts (higher accuracy, slight delay). Set to true for lower latency. The code filters is_final to avoid duplicates.
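The de-duplication is a one-line check. A sketch of the handler, assuming results arrive as parsed JSON dicts:

```python
def handle_message(data: dict, buffer: list[str]) -> None:
    """Append only finalized segments; interim results revise in place and would duplicate text."""
    if data.get("type") != "Results":
        return  # skip metadata and other non-transcript messages
    text = data["channel"]["alternatives"][0].get("transcript", "")
    if not text:
        return
    if data.get("is_final"):
        buffer.append(text)          # committed; safe to batch for Mavera
    else:
        print(f"(interim) {text}")   # display only; a later result supersedes it
```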
Mavera generation latency
Each generation takes 2-5 seconds. For fast-paced events, increase the batch interval to 45-60 seconds so each post draws on a meatier transcript segment.
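A sketch of the batching side using `httpx`; the Mavera host, auth scheme, and payload shape are assumptions, and only the POST /generations path comes from the flow above:

```python
import asyncio
import httpx

# Hypothetical host; the flow above specifies only POST /generations.
MAVERA_URL = "https://api.mavera.example/generations"

async def batch_to_mavera(buffer: list[str], api_key: str, interval: float = 45.0):
    """Flush the transcript buffer every `interval` seconds and generate one blog post."""
    async with httpx.AsyncClient(timeout=30.0) as client:  # generations take 2-5s; leave headroom
        while True:
            await asyncio.sleep(interval)
            if not buffer:
                continue  # nothing transcribed this window
            segment = " ".join(buffer)
            buffer.clear()
            resp = await client.post(
                MAVERA_URL,
                headers={"Authorization": f"Bearer {api_key}"},  # auth scheme assumed
                json={"prompt": f"Write a live-blog post from this transcript:\n{segment}"},  # payload assumed
            )
            resp.raise_for_status()
            print("LIVE BLOG:", resp.json())
```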