Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Scenario

Generate audio summaries of focus group results for executives who prefer listening over reading. Retrieve the focus group data, synthesize the findings with Mave Agent, then convert the synthesis into an audio executive briefing. Flow: Mavera GET /focus-groups/{id} → POST /mave/chat (synthesize) → ElevenLabs TTS → Audio briefing

Code

import os, requests, time
EL_KEY = os.environ["ELEVENLABS_API_KEY"]
EL_BASE = "https://api.elevenlabs.io/v1"
MV = os.environ["MAVERA_API_KEY"]
MV_BASE = "https://app.mavera.io/api/v1"
MV_H = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}
os.makedirs("audio_reports", exist_ok=True)
FG_ID, VOICE_ID = "fg_abc123def456", "21m00Tcm4TlvDq8ikWAM"

# 1. Retrieve focus group results (fail fast on HTTP errors)
fg_resp = requests.get(f"{MV_BASE}/focus-groups/{FG_ID}", headers=MV_H)
fg_resp.raise_for_status()
fg = fg_resp.json()
questions, responses, personas = fg.get("questions", []), fg.get("responses", []), fg.get("personas", [])
print(f"Focus group: {fg.get('name')} — {len(personas)} personas, {len(responses)} responses")

# 2. Build response corpus
corpus = ""
for q in questions:
    corpus += f"\n## {q}\n"
    for r in [r for r in responses if r.get("question") == q]:
        name = next((p["name"] for p in personas if p["id"] == r.get("persona_id")), "Unknown")
        corpus += f"  [{name}]: {r.get('answer', '')}\n"

# 3. Synthesize briefing
synthesis = requests.post(f"{MV_BASE}/mave/chat", headers=MV_H, json={
    "message": f"Executive briefing writer. Convert focus group results into a 2-3 minute audio script.\n\n"
        f"FOCUS GROUP: {fg.get('name')}\nPERSONAS: {', '.join(p['name'] for p in personas)}\n\n"
        f"RESPONSES:\n{corpus[:8000]}\n\n"
        "Structure: OPENING (context), KEY FINDINGS (top 3-4 with quotes), CONSENSUS POINTS, "
        "DIVERGENCE POINTS, RECOMMENDED ACTIONS (3 ranked), CLOSING.\n"
        "Write conversationally — this will be listened to, not read.",
})
synthesis.raise_for_status()
briefing = synthesis.json().get("content", "")
words = len(briefing.split())
print(f"Briefing: {words} words (~{words * 60 // 150}s audio)")
time.sleep(1)

# 4. Convert to audio
tts = requests.post(f"{EL_BASE}/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": EL_KEY, "Content-Type": "application/json"},
    json={"text": briefing, "model_id": "eleven_multilingual_v2",
          "voice_settings": {"stability": 0.6, "similarity_boost": 0.7,
                             "style": 0.2, "use_speaker_boost": True}})
if tts.status_code == 200:
    path = f"audio_reports/fg-briefing-{FG_ID}.mp3"
    with open(path, "wb") as f:
        f.write(tts.content)
    print(f"Audio saved: {path} ({len(tts.content) // 1024} KB, ~{words * 60 // 150}s)")
else:
    print(f"TTS failed ({tts.status_code}): {tts.text}")

Example Output

Focus group: Q1 Brand Refresh Validation — 4 personas, 32 responses
Briefing: 412 words (~165s audio)
Audio saved: audio_reports/fg-briefing-fg_abc123def456.mp3 (287 KB, ~165s)

Preview: "Here's your executive briefing on the Q1 Brand Refresh focus
group. Messaging resonated strongly — all four personas rated the new
tagline clearer than the current version. The color palette divided
the room. Priority one: keep the tagline. Priority two: A/B test
both palettes with real traffic before committing."

Error Handling

Check fg.get("status") before processing. If the run is still in progress, poll at 5-second intervals until it reports completed. Incomplete data produces unreliable briefings.
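A minimal polling sketch. The status values ("completed", "failed") are assumptions about the API, not confirmed fields, and the fetch callable is injected so the loop is easy to test in isolation:

```python
import time

def wait_for_focus_group(fg_id, fetch, interval=5, timeout=300):
    """Poll fetch(fg_id) until the focus group reaches a terminal status.

    `fetch` should return the focus group dict (e.g. a wrapper around
    GET /focus-groups/{id}). Status values here are assumed, not documented.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        fg = fetch(fg_id)
        status = fg.get("status")
        if status == "completed":
            return fg
        if status == "failed":
            raise RuntimeError(f"Focus group {fg_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"Focus group {fg_id} did not complete within {timeout}s")
```

In the script above, fetch would be something like `lambda i: requests.get(f"{MV_BASE}/focus-groups/{i}", headers=MV_H).json()`.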
ElevenLabs TTS accepts up to ~5,000 characters per request. For longer briefings, split at paragraph boundaries and make sequential TTS calls. Concatenate with ffmpeg -f concat -i list.txt -c copy briefing.mp3.
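One way to do the paragraph-boundary split is a greedy packer; this is a sketch of the approach, with the ~5,000-character limit taken from the note above:

```python
def split_for_tts(text, limit=5000):
    """Greedily pack paragraphs into chunks that stay within `limit` characters.

    A single paragraph longer than `limit` is emitted as its own chunk
    rather than split mid-sentence.
    """
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Send each chunk through the same text-to-speech call, save the parts as numbered MP3s, and concatenate them with the ffmpeg command above.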
Use a calm, authoritative voice for executive briefings. ElevenLabs’ Rachel (21m00Tcm4TlvDq8ikWAM) works well. Stability 0.6+ and low style values produce professional narration.