

Scenario

Wistia’s engagement heatmap shows per-second viewer retention for every video — the exact moments where viewers rewatch, skip, or drop off. This job pulls the engagement data for a video, identifies the sharpest drop-off points, then sends the timestamp and context to Mave with the question: “Viewers drop off at 0:45. Research best practices. Suggest specific edits.” The result is a creative revision plan grounded in actual viewer behavior — not gut instinct — with research-backed recommendations for each drop-off moment.

Architecture

Wistia Stats API (per-second engagement data) → drop-off detection → retention curve summary → Mave chat (research-backed edit recommendations)

Code

import os

import requests

WS = os.environ["WISTIA_API_TOKEN"]
MV = os.environ["MAVERA_API_KEY"]
WS_BASE = "https://api.wistia.com"
MV_BASE = "https://app.mavera.io/api/v1"
WS_H = {"Authorization": f"Bearer {WS}", "Accept": "application/json"}
MV_H = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

MEDIA_HASHED_ID = "abc123def4"

# 1. Fetch media stats with engagement data
media_resp = requests.get(
    f"{WS_BASE}/v1/stats/medias/{MEDIA_HASHED_ID}.json", headers=WS_H, timeout=30
)
media_resp.raise_for_status()
media_stats = media_resp.json()

media_name = media_stats.get("name", MEDIA_HASHED_ID)
duration = media_stats.get("duration", 0)
play_count = media_stats.get("play_count", 0)
engagement = media_stats.get("engagement", 0)

print(f"Video: \"{media_name}\" | Duration: {duration:.0f}s | Plays: {play_count} | Engagement: {engagement:.1%}")

# 2. Get engagement graph (per-second retention)
engagement_data = media_stats.get("engagement_data", [])
if not engagement_data:
    eng_resp = requests.get(
        f"{WS_BASE}/v1/stats/medias/{MEDIA_HASHED_ID}/engagement.json", headers=WS_H
    ).json()
    # The engagement endpoint may return a bare list or a dict wrapping one
    if isinstance(eng_resp, dict):
        eng_resp = eng_resp.get("engagement_data", [])
    engagement_data = eng_resp if isinstance(eng_resp, list) else []

# 3. Identify drop-off points (where retention drops >10% within 5 seconds)
drop_offs = []
for i in range(5, len(engagement_data)):
    current = engagement_data[i]
    previous = engagement_data[i - 5]
    drop = previous - current
    if drop > 0.10:
        timestamp_sec = i
        mins = timestamp_sec // 60
        secs = timestamp_sec % 60
        drop_offs.append({
            "timestamp": f"{mins}:{secs:02d}",
            "seconds": timestamp_sec,
            "retention_before": round(previous * 100, 1),
            "retention_after": round(current * 100, 1),
            "drop_pct": round(drop * 100, 1),
        })

# Deduplicate nearby drops (keep the steepest within 10-second windows)
filtered_drops = []
for d in sorted(drop_offs, key=lambda x: -x["drop_pct"]):
    if not any(abs(d["seconds"] - f["seconds"]) < 10 for f in filtered_drops):
        filtered_drops.append(d)

filtered_drops.sort(key=lambda x: x["seconds"])
print(f"\nIdentified {len(filtered_drops)} significant drop-off points:")
for d in filtered_drops:
    print(f"  {d['timestamp']} — Retention: {d['retention_before']}% → {d['retention_after']}% (−{d['drop_pct']}%)")

# 4. Send to Mave for creative optimization recommendations
drop_block = "\n".join(
    f"- At {d['timestamp']}: retention drops from {d['retention_before']}% to "
    f"{d['retention_after']}% (−{d['drop_pct']}% in 5 seconds)"
    for d in filtered_drops
)

# 5. Build retention curve summary
curve_points = []
if engagement_data:
    for pct in [0, 10, 25, 50, 75, 90, 100]:
        # Clamp so 100% maps to the last sample instead of falling out of range
        idx = min(int(len(engagement_data) * pct / 100), len(engagement_data) - 1)
        curve_points.append(f"  {pct}% through: {engagement_data[idx]*100:.1f}% still watching")

optimization = requests.post(f"{MV_BASE}/mave/chat", headers=MV_H, json={
    "message": f"""Analyze these video drop-off points and suggest specific creative edits.

VIDEO: "{media_name}" ({duration:.0f}s, {play_count:,} plays)
OVERALL ENGAGEMENT: {engagement:.1%}

RETENTION CURVE:
{chr(10).join(curve_points)}

DROP-OFF POINTS:
{drop_block}

For each drop-off point:
1. **Likely Cause**: What typically causes viewers to leave at this point in a video? Research best practices.
2. **Specific Edit**: Describe exactly what to change at this timestamp (cut, restructure, add element, change pacing)
3. **Before/After**: Write the transition from what's probably happening now to what should happen instead
4. **Expected Impact**: Estimate retention improvement if this edit is made

Also provide:
5. **Opening Assessment**: Is the first 5 seconds strong enough? What would improve it?
6. **Pacing Diagnosis**: Is the video front-loaded, back-loaded, or evenly paced?
7. **Optimal Length**: Based on the retention curve, what should this video's runtime be?""",
}, timeout=120).json()

print("\nHEATMAP-INFORMED CREATIVE OPTIMIZATION")
print("=" * 60)
print(f"Video: {media_name}")
print(f"Plays: {play_count:,} | Engagement: {engagement:.1%} | Drop-offs: {len(filtered_drops)}")
print("\n" + optimization.get("content", "")[:2500])

Example Output

Video: "Product Demo — Enterprise Dashboard" | Duration: 186s | Plays: 4280 | Engagement: 38.2%

Identified 3 significant drop-off points:
  0:45 — Retention: 72.3% → 58.1% (−14.2%)
  1:48 — Retention: 51.0% → 38.4% (−12.6%)
  2:41 — Retention: 34.2% → 22.8% (−11.4%)

HEATMAP-INFORMED CREATIVE OPTIMIZATION
============================================================

## Drop-Off Analysis

### At 0:45 (−14.2% retention)
**Likely Cause**: This is the classic "intro fatigue" point. The first 45
seconds likely contain branding, context-setting, or agenda slides. Viewers
who clicked expecting to see the product in action are leaving because
they haven't seen it yet.
**Specific Edit**: Cut the intro to 15 seconds maximum. Move the first
product screenshot or interaction to the 0:10 mark. Replace the agenda
slide with a 3-second text overlay: "Here's what you'll see in 3 minutes."
**Expected Impact**: +8-12% retention at 0:45 (from 58% to 66-70%)

### At 1:48 (−12.6% retention)
**Likely Cause**: Feature overload. At this point, the demo has likely
shown 3-4 features in rapid succession without connecting them to outcomes.
Viewers lose the thread of "why does this matter to me?"
**Specific Edit**: Insert a 10-second "so what" bridge at 1:40:
"That means your team saves 4 hours every Monday morning." Then transition
to the next feature. Pattern: Feature → Outcome → Feature → Outcome.
**Expected Impact**: +6-9% retention at 1:48

### At 2:41 (−11.4% retention)
**Likely Cause**: The video is too long for a product demo. Viewers who
made it this far have already decided whether they're interested. The
remaining 65 seconds are diminishing returns.
**Specific Edit**: End the video at 2:30 with a strong CTA. Move any
remaining features to a "Part 2" video for viewers who want depth.
**Expected Impact**: Eliminating the tail improves overall engagement
from 38% to an estimated 48-52%.

## Optimal Length
Based on the retention curve, this video should be **90-120 seconds**.
50% of viewers are gone by 1:48. Cut the runtime in half and engagement
will double.

Error Handling

Wistia’s engagement data is an array of floats (0.0-1.0) representing the fraction of viewers still watching at each second. Some older videos may lack this granularity. If engagement_data is empty, the video may not have enough plays to generate a heatmap (minimum ~5 plays).
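Because the response shape can vary, a small normalizer keeps the rest of the job agnostic to whether the endpoint returns a bare list or a wrapping dict. This is a sketch; the `engagement_data` wrapper key is an assumption mirroring the dict-vs-list handling in the code above:

```python
def normalize_engagement(resp):
    """Coerce an engagement response into a list of floats in [0.0, 1.0].

    Handles a bare list, a dict wrapping the list (assumed key:
    "engagement_data"), or anything else (empty heatmap, error body).
    """
    if isinstance(resp, dict):
        resp = resp.get("engagement_data", [])
    if not isinstance(resp, list):
        return []
    return [float(x) for x in resp]
```

An empty result then signals "no heatmap yet" (too few plays) rather than crashing the drop-off loop.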
Wistia uses hashed IDs (alphanumeric strings like abc123def4) in most API endpoints. Don’t confuse these with numeric media IDs. You can find the hashed ID in the video’s embed URL or via GET /v1/medias.json.
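To avoid mixing up numeric IDs and hashed IDs, it can help to pull the hashed ID straight out of a URL you already have. A minimal sketch, assuming the common `/embed/iframe/<id>` and `/medias/<id>` URL patterns:

```python
import re


def hashed_id_from_url(url):
    """Extract the hashed media ID from a Wistia embed or media-page URL.

    The URL patterns here are assumptions based on typical Wistia links;
    returns None when neither pattern matches.
    """
    match = re.search(r"/(?:medias|iframe)/([A-Za-z0-9]+)", url)
    return match.group(1) if match else None
```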
The 10% threshold is configurable. For shorter videos (under 60s), tighten to 5% — a 10% drop in a short video is more significant. For long-form (10+ min), relax to 15% to avoid false positives from natural attention fluctuation.
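The duration-scaled threshold described above can be captured in one helper; the 60-second and 10-minute boundaries mirror this guidance but are illustrative, not prescriptive:

```python
def drop_threshold(duration_s):
    """Pick a drop-off threshold based on video length."""
    if duration_s < 60:
        return 0.05  # short videos: flag smaller dips
    if duration_s >= 600:
        return 0.15  # long-form: tolerate natural attention fluctuation
    return 0.10      # default used in the job above
```

In the detection loop, replace the hard-coded `0.10` with `drop_threshold(duration)`.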

What’s Next

Wistia Integration

Back to Wistia integration overview

Viewer-Level Persona Mapping

Map viewers to psychographic personas

CTA Performance × Focus Group

Optimize CTA placement and messaging

Mave Agent

Full reference for POST /api/v1/mave/chat