
Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Scenario

You have recorded meetings — sales calls, standups, strategy sessions — sitting on disk as audio files. This job transcribes them with Whisper, then sends the transcript to Mavera’s Mave Agent for structured analysis: action items, decisions made, themes discussed, follow-up owners, and deadlines.

Flow: OpenAI Whisper POST /audio/transcriptions → transcript text → Mavera POST /mave/chat → structured meeting analysis

Code

import os, requests, time
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
MV = os.environ["MAVERA_API_KEY"]
MV_BASE = "https://app.mavera.io/api/v1"
MV_H = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

AUDIO_FILE = "meeting-2026-03-17.mp3"

# 1. Transcribe with Whisper
print(f"Transcribing {AUDIO_FILE}...")
with open(AUDIO_FILE, "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=f,
        response_format="verbose_json",
        timestamp_granularities=["segment"],
    )

text = transcript.text
duration = getattr(transcript, "duration", 0)
print(f"Transcribed: {len(text)} chars, {duration:.0f}s duration")

time.sleep(1)  # brief pause to space out the two API calls

# 2. Send to Mavera for structured analysis
resp = requests.post(f"{MV_BASE}/mave/chat", headers=MV_H, timeout=120, json={
    "message": f"Meeting analyst. Analyze this {duration:.0f}-second meeting transcript.\n\n"
        f"TRANSCRIPT:\n{text[:12000]}\n\n"
        "Extract:\n"
        "1. **MEETING SUMMARY** — 3-sentence overview\n"
        "2. **DECISIONS MADE** — Each decision, who made it, context\n"
        "3. **ACTION ITEMS** — Task, owner, deadline\n"
        "4. **KEY THEMES** — Top 5 themes with time ranges\n"
        "5. **FOLLOW-UPS** — What needs to happen before the next meeting\n"
        "6. **SENTIMENT** — Overall tone, any tension noted\n"
})
resp.raise_for_status()  # fail loudly on 4xx/5xx instead of parsing an error body
analysis = resp.json()

print(f"\n{'='*60}\nMEETING ANALYSIS\n{'='*60}")
print(analysis.get("content", "")[:4000])

Example Output

Transcribed: 18432 chars, 2713s duration

MEETING SUMMARY
Weekly product sync covering Q2 roadmap prioritization and the
enterprise launch. Dashboard redesign ships Apr 4, mobile SDK deferred to Q3.

DECISIONS MADE
1. Dashboard redesign ships Apr 4 — Sarah (VP Product)
2. Mobile SDK deferred to Q3 — full team
3. Hire 2 data engineers before May 1 — Dan owns

ACTION ITEMS
- [ ] Sarah: Share wireframes by Mar 21
- [ ] Dan: Post job listings by Mar 19
- [ ] Priya: Load tests on staging by Mar 24

SENTIMENT: Collaborative, focused. Minor tension on SDK timeline — resolved.

Error Handling

Whisper accepts files up to 25 MB. For longer recordings, split with ffmpeg -i meeting.mp3 -f segment -segment_time 600 -c copy chunk_%03d.mp3 and transcribe each chunk. Concatenate transcripts before sending to Mavera.
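The split-and-join step can be sketched as a pair of helpers (a minimal sketch assuming ffmpeg is on PATH; `split_audio` and `join_transcripts` are illustrative names, not part of either API):

```python
import glob
import subprocess

def split_audio(path: str, seconds: int = 600) -> list[str]:
    """Split an audio file into numbered chunks via ffmpeg; return them in order."""
    subprocess.run(
        ["ffmpeg", "-i", path, "-f", "segment",
         "-segment_time", str(seconds), "-c", "copy", "chunk_%03d.mp3"],
        check=True,
    )
    # %03d zero-padding keeps lexicographic sort equal to recording order
    return sorted(glob.glob("chunk_*.mp3"))

def join_transcripts(parts: list[str]) -> str:
    """Concatenate per-chunk transcripts, in order, separated by blank lines."""
    return "\n\n".join(p.strip() for p in parts)
```

Transcribe each returned chunk with the same client.audio.transcriptions.create call as the main script, collect the .text fields, and pass the joined string to Mavera.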
Whisper supports mp3, mp4, mpeg, mpga, m4a, wav, and webm. Convert other formats with ffmpeg -i input.ogg output.mp3 before uploading.
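The conversion can be wrapped in a small helper (a sketch assuming ffmpeg is on PATH; `to_mp3` and `mp3_name` are illustrative names):

```python
import pathlib
import subprocess

def mp3_name(path: str) -> str:
    """Target filename: same stem, .mp3 extension."""
    return str(pathlib.Path(path).with_suffix(".mp3"))

def to_mp3(path: str) -> str:
    """Convert any ffmpeg-readable file to mp3 for the Whisper upload."""
    out = mp3_name(path)
    subprocess.run(["ffmpeg", "-i", path, out], check=True)
    return out
```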
Whisper has per-minute request limits. If you get a 429, implement exponential backoff: time.sleep(2 ** attempt). Batch multiple files with 2-second gaps between requests.
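The backoff advice can be wrapped in a small retry helper (a sketch; `with_backoff` is a hypothetical name, and a real implementation should catch openai.RateLimitError rather than bare Exception):

```python
import time

def with_backoff(fn, max_attempts: int = 5, delay: float = 1.0):
    """Call fn(); on failure, sleep delay * 2**attempt seconds and retry."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:  # narrow this to openai.RateLimitError in practice
            if attempt == max_attempts - 1:
                raise
            time.sleep(delay * (2 ** attempt))  # 1s, 2s, 4s, 8s...
```

Usage would look like `with_backoff(lambda: client.audio.transcriptions.create(model="whisper-1", file=f))`.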