
Scenario

You run a webinar series on Vimeo — a multi-session program where engagement typically declines across sessions. This job pulls all videos from a Vimeo showcase (album/folder) representing the series, runs Video Analysis on each session, then asks Mave to diagnose engagement shifts: “How does engagement shift across this series? Where do we lose them?” The result is a session-by-session intelligence report that shows you exactly where audience interest peaks, drops, and recovers — so you can restructure future series for maximum retention.

Architecture

Code

import os, requests, time

VM = os.environ["VIMEO_ACCESS_TOKEN"]
MV = os.environ["MAVERA_API_KEY"]
VM_BASE = "https://api.vimeo.com"
MV_BASE = "https://app.mavera.io/api/v1"
VM_H = {"Authorization": f"Bearer {VM}", "Accept": "application/vnd.vimeo.*+json;version=3.4"}
MV_H = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

ALBUM_ID = "12345678"

# 1. Fetch all videos in the showcase/album (series)
album_resp = requests.get(f"{VM_BASE}/me/albums/{ALBUM_ID}/videos", headers=VM_H, params={
    "per_page": 50, "sort": "manual",
    "fields": "uri,name,link,duration,stats,created_time,description",
}).json()

if "error" in album_resp:
    raise SystemExit(f"Vimeo API error: {album_resp.get('developer_message', '')}")

sessions = []
for v in album_resp.get("data", []):
    vid_id = v["uri"].split("/")[-1]
    stats = v.get("stats", {})
    sessions.append({
        "id": vid_id, "name": v["name"], "link": v["link"],
        "duration": v.get("duration", 0),
        "plays": stats.get("plays", 0),
        "finishes": stats.get("finishes", 0),
        "finish_rate": round(stats.get("finishes", 0) / max(stats.get("plays", 1), 1) * 100, 1),
        "created": v.get("created_time", "")[:10],
        "description": v.get("description", "")[:200],
    })

# 2. Get album metadata
album_meta = requests.get(f"{VM_BASE}/me/albums/{ALBUM_ID}", headers=VM_H, params={
    "fields": "name,description,metadata.connections.videos.total",
}).json()
series_name = album_meta.get("name", f"Album {ALBUM_ID}")

print(f"Series: \"{series_name}\" — {len(sessions)} sessions")

# 3. Analyze each session via Mavera
session_scores = []
for i, session in enumerate(sessions):
    print(f"  Analyzing S{i+1}: \"{session['name'][:40]}\"")

    upload = requests.post(f"{MV_BASE}/assets", headers=MV_H, json={
        "url": session["link"], "name": f"[S{i+1}] {session['name'][:60]}", "type": "video",
    }).json()

    analysis = requests.post(f"{MV_BASE}/video-analysis", headers=MV_H, json={
        "asset_id": upload["id"],
        "analysis_types": [
            "emotional_arc", "hook_score", "pacing", "cognitive_load",
            "message_clarity", "behavioral_effectiveness",
        ],
        "metadata": {"session_number": i + 1, "series": series_name},
    }).json()

    for _ in range(30):
        time.sleep(3)
        status = requests.get(
            f"{MV_BASE}/video-analysis/{analysis['id']}", headers=MV_H
        ).json()
        if status.get("status") == "completed":
            break

    r = status.get("results", {})
    session_scores.append({
        **session,
        "session_num": i + 1,
        "hook": r.get("hook_score", {}).get("score", 0),
        "emotion_avg": r.get("emotional_arc", {}).get("intensity_avg", 0),
        "emotion_peak": r.get("emotional_arc", {}).get("peak_intensity", 0),
        "clarity": r.get("message_clarity", {}).get("score", 0),
        "pacing_score": r.get("pacing", {}).get("score", 0),
        "cog_load": r.get("cognitive_load", {}).get("average", 0),
        "arc_summary": r.get("emotional_arc", {}).get("summary", ""),
    })
    time.sleep(1)

# 4. Series arc analysis via Mave
session_block = "\n\n".join(
    f"SESSION {s['session_num']}: \"{s['name']}\"\n"
    f"  Date: {s['created']} | Duration: {s['duration']}s | Plays: {s['plays']:,} | Finish Rate: {s['finish_rate']}%\n"
    f"  Hook: {s['hook']}/100 | Emotion Avg: {s['emotion_avg']:.1f}/10 | Peak: {s['emotion_peak']:.1f}/10\n"
    f"  Clarity: {s['clarity']}/100 | Pacing: {s['pacing_score']:.1f}/10 | Cog Load: {s['cog_load']:.1f}/10\n"
    f"  Arc: {s['arc_summary'][:150]}"
    for s in session_scores
)

series_analysis = requests.post(f"{MV_BASE}/mave/chat", headers=MV_H, json={
    "message": f"""Analyze engagement patterns across this webinar series.

SERIES: "{series_name}" ({len(session_scores)} sessions)

{session_block}

Produce:
1. **Engagement Arc**: How does audience engagement shift across sessions? Where does it peak and where does it drop?
2. **Drop-Off Diagnosis**: At which session do we lose the most viewers? What specifically about that session's creative quality explains the drop?
3. **Session Comparison**: Which session is the strongest and weakest? What makes them different?
4. **Pacing Pattern**: Does the series maintain energy or does it fatigue viewers? Is cognitive load increasing across sessions?
5. **Recommendations**: How should we restructure the next series to maintain engagement? Specific changes to session order, content density, and hook strategy.
6. **Recovery Opportunities**: Can we save the weakest session with a re-edit? What specifically should change?""",
}).json()

print(f"\nWEBINAR SERIES INTELLIGENCE — \"{series_name}\"")
print("=" * 65)
print(f"{'Session':<8} {'Title':<30} {'Plays':>7} {'Finish%':>8} {'Hook':>6} {'Emotion':>8} {'Clarity':>8}")
print("-" * 65)
for s in session_scores:
    print(f"  S{s['session_num']:<5} {s['name'][:28]:<30} {s['plays']:>7,} {s['finish_rate']:>7.1f}% "
          f"{s['hook']:>5} {s['emotion_avg']:>7.1f} {s['clarity']:>7}")
print("\n" + series_analysis.get("content", "")[:2500])
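The inline polling loop above gives up silently after 90 seconds and treats a failed analysis the same as a pending one. A more defensive variant, as a sketch (the "failed" terminal status is an assumption, check the Video Analysis API reference for the exact terminal states):

```python
import time

def wait_for_analysis(fetch_status, attempts=30, delay=3):
    """Poll until the analysis reaches a terminal state, or give up loudly.

    `fetch_status` is any zero-argument callable returning the status dict,
    e.g. a lambda wrapping the requests.get call from the script above.
    """
    status = {}
    for _ in range(attempts):
        status = fetch_status()
        # "completed" appears in the script above; "failed" is assumed.
        if status.get("status") in ("completed", "failed"):
            return status
        time.sleep(delay)
    raise TimeoutError(
        f"analysis still {status.get('status', 'unknown')} after {attempts * delay}s"
    )
```

Dropping this in place of the inline loop turns a silent timeout into an exception you can catch per session, so one stuck analysis doesn't corrupt the whole series report.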

Example Output

Series: "Marketing Masterclass 2026" — 6 sessions
  Analyzing S1: "The Landscape: Where Marketing Is Heading"
  Analyzing S2: "Audience Research That Actually Works"
  Analyzing S3: "Content Strategy Framework"
  Analyzing S4: "Paid Media Optimization Deep Dive"
  Analyzing S5: "Measurement & Attribution"
  Analyzing S6: "Putting It All Together — Live Workshop"

WEBINAR SERIES INTELLIGENCE — "Marketing Masterclass 2026"
=================================================================
Session  Title                          Plays  Finish%   Hook  Emotion  Clarity
-----------------------------------------------------------------
  S1     The Landscape: Where Market     2,400   71.2%     82      7.8       88
  S2     Audience Research That Actu     1,800   68.5%     75      7.1       91
  S3     Content Strategy Framework      1,200   54.3%     61      5.4       74
  S4     Paid Media Optimization Dee       840   42.1%     48      4.2       69
  S5     Measurement & Attribution         620   38.7%     44      3.8       65
  S6     Putting It All Together —         580   61.2%     71      6.9       82

## Series Engagement Arc

### Peak: Session 1 (71.2% finish rate, hook 82)
Strong opening with broad relevance. The "landscape" format gives
viewers context without demanding commitment. High emotion (7.8) from
future-focused optimism and FOMO triggers.

### Critical Drop: Sessions 3-5 (finish rate drops from 68% to 39%)
The series loses 56% of Session 1's audience by Session 5. The pattern:
- **Session 3**: Cognitive load spikes (7.2/10) — the framework is too dense
  for a webinar format. Pacing drops to 4.8/10. Viewers disengage when
  they can't keep up.
- **Session 4**: Hook score 48 — the title "Deep Dive" signals effort,
  scaring away casual viewers. Emotion drops to 4.2 — all tactical, no story.
- **Session 5**: "Measurement" is the hardest topic to make engaging.
  Clarity 65 suggests the content assumes too much prior knowledge.

### Recovery: Session 6 (61.2% finish rate)
The live workshop format re-engages survivors. Interactive format lifts
emotion back to 6.9. Viewers who made it to S6 are committed.

### Recommendations for Next Series
1. **Front-load high-emotion sessions** — move the workshop to S3 to catch
   viewers before they drop
2. **Cap cognitive load at 5.0/10** — Session 3's framework needs to be split
   into two lighter sessions
3. **Rename Session 4** — "Deep Dive" repels. Try "The 3 Paid Media Wins
   You Can Implement This Week" (action-oriented, bounded)
4. **Add hooks to every session** — Sessions 4-5 open with slides. Open with
   a result: "This framework saved one client $240K last quarter"
5. **Mid-series re-engagement email** — After Session 3, send a "catch up in
   5 minutes" summary to win back drop-offs

Error Handling

Vimeo uses “albums” (also called showcases) to group videos. The API endpoint is /me/albums/{id}/videos. Folders (for organization) use a different endpoint: /me/folders/{id}/videos. Ensure you’re using the correct one.

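The endpoint difference can be captured in a small helper; a sketch grounded only in the two paths named above:

```python
def videos_path(container_id: str, kind: str = "album") -> str:
    """Return the video-listing path for a showcase/album or a folder."""
    if kind == "album":   # showcases ("albums" in the API)
        return f"/me/albums/{container_id}/videos"
    if kind == "folder":  # organizational folders
        return f"/me/folders/{container_id}/videos"
    raise ValueError(f"unknown container kind: {kind}")
```

Swapping `kind` is then a one-line change instead of editing every request URL in the script.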
The sort=manual parameter returns videos in the order you arranged them in the showcase. If you want chronological order instead, use sort=date. The session numbering in the code follows the API sort order.

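If the showcase arrangement and the recording order diverge, you can also re-sort client-side on the "created" field built in the script (ISO date strings sort lexicographically, which is chronological). A sketch:

```python
def chronological(sessions):
    """Re-order session dicts by their "created" date and renumber them."""
    ordered = sorted(sessions, key=lambda s: s["created"])
    return [{**s, "session_num": i + 1} for i, s in enumerate(ordered)]
```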
For long series, batch Video Analysis into groups of 5 with 60-second pauses between groups to respect both Vimeo and Mavera rate limits. The total analysis time scales linearly with session count.
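The batching advice above can be sketched as a small pacing loop; `analyze_session` is a hypothetical stand-in for the per-session upload-and-analyze calls from the script:

```python
import time

def batched(items, size=5):
    """Yield consecutive chunks of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def run_in_batches(sessions, analyze_session, pause=60):
    """Analyze sessions in groups, pausing between groups for rate limits."""
    results = []
    for i, group in enumerate(batched(sessions)):
        if i > 0:
            time.sleep(pause)  # breathe between groups of 5
        results.extend(analyze_session(s) for s in group)
    return results
```

Keeping the chunking separate from the pause logic makes both easy to tune if either provider's rate limits change.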

What’s Next

Vimeo Integration

Back to Vimeo integration overview

Video Library Analysis

Score your entire video catalog

Video Analysis API

Full reference for POST /api/v1/video-analysis

Mave Agent

Full reference for POST /api/v1/mave/chat