Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Scenario

Use every audio intelligence feature Deepgram offers — sentiment analysis, summarization, topic detection, entity detection, intent detection, and speaker diarization — in a single /v1/listen call with Nova-3. Feed the enriched transcript to Mavera for structured meeting analysis: decisions, action items, risk signals, and stakeholder dynamics.

Flow: Deepgram POST /v1/listen?model=nova-3&smart_format=true&summarize=v2&detect_topics=true&detect_entities=true&sentiment=true&diarize=true&intents=true → enriched transcript → Mavera POST /mave/chat → structured meeting intelligence

Code

import json
import os
import time

import requests

DG = os.environ["DEEPGRAM_API_KEY"]
MV = os.environ["MAVERA_API_KEY"]
MV_BASE = "https://app.mavera.io/api/v1"
MV_H = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

# Request every audio intelligence feature in a single pass.
params = {
    "model": "nova-3", "smart_format": "true", "summarize": "v2",
    "detect_topics": "true", "detect_entities": "true", "sentiment": "true",
    "diarize": "true", "intents": "true", "punctuate": "true",
    "paragraphs": "true", "utterances": "true", "language": "en",
}
with open("strategy-meeting-2026-03-17.wav", "rb") as f:
    resp = requests.post("https://api.deepgram.com/v1/listen", params=params,
        headers={"Authorization": f"Token {DG}", "Content-Type": "audio/wav"},
        data=f, timeout=120)
resp.raise_for_status()
r = resp.json()

summary = r["results"].get("summary", {}).get("short", "")
topics = [t["topic"] for s in r["results"].get("topics",{}).get("segments",[])
          for t in s.get("topics",[])]
entities = [f"{e['label']}: {e['value']}" for s in r["results"].get("entities",{}).get("segments",[])
            for e in s.get("entities",[])]
sent_dist = {}
for s in r["results"].get("sentiments",{}).get("segments",[]):
    k = s.get("sentiment","neutral"); sent_dist[k] = sent_dist.get(k,0)+1
speakers = [f"Speaker {u.get('speaker','?')}: {u['transcript'][:200]}"
    for u in r.get("results",{}).get("utterances",[])[:40]]

enriched = (f"SUMMARY:\n{summary}\n\nTOPICS:\n{chr(10).join(topics[:10])}\n\n"
    f"ENTITIES:\n{chr(10).join(entities[:15])}\n\nSENTIMENT: {json.dumps(sent_dist)}\n\n"
    f"SPEAKERS:\n{chr(10).join(speakers)}")
time.sleep(1)
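# intents=true is requested above but never read back. Hedged sketch, assuming
# Deepgram returns intent results at results.intents.segments[].intents[].intent:
intents = [i["intent"] for s in r["results"].get("intents", {}).get("segments", [])
           for i in s.get("intents", [])]
if intents:
    enriched += f"\n\nINTENTS:\n{chr(10).join(intents[:10])}"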

analysis = requests.post(f"{MV_BASE}/mave/chat", headers=MV_H, json={
    "message": f"Meeting intelligence analyst. Deepgram audio intelligence:\n\n"
        f"{enriched[:10000]}\n\nProduce:\n"
        "1. **EXECUTIVE SUMMARY** — 3 sentences\n2. **DECISIONS MADE** — Decision, owner, context\n"
        "3. **ACTION ITEMS** — Task, owner, deadline\n4. **RISK SIGNALS** — Negative sentiment + mismatches\n"
        "5. **STAKEHOLDER DYNAMICS** — Who drove conversation, who was silent\n"
        "6. **TOPIC DEEP-DIVE** — Each topic with strategic implications\n"
}).json()
print(analysis.get("content", "")[:4000])

Example Output

Transcript: 24,187 chars | Topics: 8 | Entities: 12 | Speakers: 4

EXECUTIVE SUMMARY
Q2 roadmap review with 4 stakeholders. Dashboard v3 ships Apr 11,
API partner program launches May 1, enterprise SSO prioritized.

DECISIONS MADE
1. Dashboard v3 ships Apr 11 — Sarah (VP Product)
2. API partner program May 1 — Marcus owns onboarding

RISK SIGNALS
- Negative sentiment spike at 14:32 re: integration timeline
- Speaker 3 silent after minute 22 — possible disengagement

Error Handling

Deepgram accepts files up to 2 GB per request. For anything larger, split the audio with ffmpeg -i meeting.wav -f segment -segment_time 1800 -c copy chunk_%03d.wav, transcribe each chunk separately, and concatenate the results before sending them to Mavera, as in the sketch below.
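A minimal sketch of the chunked flow, reusing params and DG from the main example; it assumes ffmpeg has already written chunk_000.wav, chunk_001.wav, … and merges only the per-chunk summaries (other fields merge the same way):

import glob

import requests

chunks = sorted(glob.glob("chunk_*.wav"))
parts = []
for path in chunks:
    with open(path, "rb") as f:
        resp = requests.post("https://api.deepgram.com/v1/listen", params=params,
            headers={"Authorization": f"Token {DG}", "Content-Type": "audio/wav"},
            data=f, timeout=120)
    resp.raise_for_status()
    cr = resp.json()["results"]
    # Keep each chunk's short summary; topics, entities, etc. can be merged the same way.
    parts.append(cr.get("summary", {}).get("short", ""))
full_summary = "\n".join(p for p in parts if p)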
Some features require a minimum audio duration: summarization needs 30+ seconds and topic detection needs 60+. On shorter files those keys are simply absent from the response, so check nested keys before accessing them (see the helper below).
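One way to guard those lookups is a small helper (the dig name is ours, not part of either API):

def dig(obj, *keys, default=None):
    """Walk nested dicts, returning default as soon as a key is missing."""
    for key in keys:
        if not isinstance(obj, dict) or key not in obj:
            return default
        obj = obj[key]
    return obj

summary = dig(r, "results", "summary", "short", default="")  # "" on short audio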
Deepgram rate-limits on concurrency rather than request volume. Retry rate-limited requests with exponential backoff (time.sleep(2 ** attempt)) and cap batch processing at 5 concurrent requests, as sketched below.
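A sketch combining both limits; transcribe_file and audio_paths are hypothetical stand-ins for a wrapper around the POST /v1/listen call above and your list of files:

import time
from concurrent.futures import ThreadPoolExecutor

import requests

def with_backoff(path, attempts=5):
    for attempt in range(attempts):
        try:
            return transcribe_file(path)  # hypothetical wrapper around POST /v1/listen
        except requests.HTTPError as e:
            # Retry only rate-limit responses; re-raise everything else.
            if e.response is not None and e.response.status_code == 429 and attempt < attempts - 1:
                time.sleep(2 ** attempt)  # 1s, 2s, 4s, 8s ...
            else:
                raise

with ThreadPoolExecutor(max_workers=5) as pool:  # concurrency cap of 5
    results = list(pool.map(with_backoff, audio_paths))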