
Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Mavera Surfaces

| Surface | Role |
|---------|------|
| Files (POST /files/upload-url, POST /files) | Upload each video ad to Mavera |
| Video Analysis (POST /video-analyses) | Frame-level scoring: engagement, emotion, attention, brand recall, CTA |
| Mave (POST /mave/chat) | Synthesize all scores into a ranked quarterly audit |

What Value Does Mavera Add?

| Value | How |
|-------|-----|
| Insurance | Every ad gets an objective score before you commit next quarter's budget. No more "I think Ad 3 was fine." |
| Opening new doors | Side-by-side ranking surfaces patterns you'd never spot manually, like a CTA placement trend across 8 ads. |
| Saving time | A manual creative review meeting takes 2–4 hours. This pipeline runs in minutes and produces a shareable report. |

When to Use This

  • End of quarter: score everything you shipped, rank it, carry learnings into next quarter’s briefs.
  • Pre-budget allocation: prove which creative styles deserve more spend.
  • Agency handoff: give your agency a data-backed scorecard instead of subjective feedback.
  • Creative retrospective: identify which elements (hooks, music, CTA timing) correlated with higher scores.

What You Need

| Requirement | Details |
|-------------|---------|
| Mavera API key | Starts with mvra_live_. Get one at Developer Settings. |
| Workspace ID | From your dashboard URL (ws_...). |
| 4–12 video ads | MP4 or MOV, 15–60 s each. More is fine; the pipeline scales linearly. |
| Credits | ~100–250 per video + ~15–30 for Mave. See Credits Estimate. |
| Python 3.9+ or Node.js 18+ | requests for Python (the code below uses built-in generic type hints, which need 3.9+); native fetch for Node. |

Export your credentials as environment variables:

MAVERA_API_KEY=mvra_live_your_key_here
MAVERA_WORKSPACE_ID=ws_your_workspace_id
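Before running the pipeline, it is worth failing fast if either variable is missing or malformed. A minimal sketch (the require_env helper is illustrative, not part of any Mavera SDK; the prefixes come from the table above):

```python
import os

def require_env(name: str, prefix: str) -> str:
    """Read a required environment variable and fail fast if it looks wrong."""
    value = os.environ.get(name, "")
    if not value.startswith(prefix):
        raise SystemExit(f"{name} is missing or does not start with '{prefix}'")
    return value

# Usage, assuming the variables above are exported:
# API_KEY = require_env("MAVERA_API_KEY", "mvra_live_")
# WORKSPACE_ID = require_env("MAVERA_WORKSPACE_ID", "ws_")
```

Catching a bad key here is cheaper than discovering it mid-upload in Stage 1.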

The Flow

1. **Collect video files.** Gather all ads shipped last quarter into a single directory. Name them descriptively (q4_hero_30s.mp4, q4_retargeting_15s.mp4) because filenames appear in the Mave prompt.
2. **Upload each video via the Files API.** Presigned URL → PUT bytes → create file record. Collect asset IDs.
3. **Run Video Analysis on every ad.** Create analyses in a batch, then poll until all complete.
4. **Normalize, rank, and synthesize with Mave.** Build a sorted metrics table and feed it to Mave: "Rank these ads by performance potential. Which elements should we keep, and which should we change?"

Stage 1 — Upload Videos

import os, time, json, glob, requests

API_KEY = os.environ["MAVERA_API_KEY"]
WORKSPACE_ID = os.environ["MAVERA_WORKSPACE_ID"]
BASE = "https://app.mavera.io/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}


def upload_video(path: str) -> dict:
    with open(path, "rb") as f:
        content = f.read()
    name = os.path.basename(path)
    mime = "video/mp4" if path.lower().endswith(".mp4") else "video/quicktime"

    url_resp = requests.post(f"{BASE}/files/upload-url", headers=HEADERS, json={
        "file_name": name, "file_type": mime,
        "file_size": len(content), "workspace_id": WORKSPACE_ID,
    }).json()
    if "error" in url_resp:
        raise Exception(url_resp["error"]["message"])

    requests.put(url_resp["upload_url"], data=content,
                 headers={"Content-Type": mime}).raise_for_status()

    file_rec = requests.post(f"{BASE}/files", headers=HEADERS, json={
        "name": name, "type": mime, "url": url_resp["public_url"],
        "workspace_id": WORKSPACE_ID, "file_size": len(content),
    }).json()
    if "error" in file_rec:
        raise Exception(file_rec["error"]["message"])
    return {"id": file_rec["id"], "name": name}


def upload_all(directory: str) -> list[dict]:
    paths = sorted(glob.glob(os.path.join(directory, "*.mp4"))
                   + glob.glob(os.path.join(directory, "*.mov")))
    if not paths:
        raise FileNotFoundError(f"No video files in {directory}")
    assets = []
    for p in paths:
        asset = upload_video(p)
        print(f"  Uploaded {asset['name']} → {asset['id']}")
        assets.append(asset)
    return assets
Name your files descriptively (q4_hero_30s.mp4, not video_3.mp4). Filenames appear in the Mave prompt and make the final report far more readable.

Stage 2 — Batch Video Analysis

Create all analyses up front, then poll. Mavera processes them concurrently — you wait roughly once, not per-ad.
def create_analysis(asset_id: str, label: str) -> dict:
    resp = requests.post(f"{BASE}/video-analyses", headers=HEADERS, json={
        "title": f"Q4 Audit: {label}", "asset_id": asset_id,
        "goal": "Score ad effectiveness: engagement, emotion, attention, brand recall, CTA",
        "brand": "Brand", "product": "Product",
        "primary_intent": "Drive purchase consideration",
        "chunk_duration": 5, "frames_per_chunk": 3, "workspace_id": WORKSPACE_ID,
    }).json()
    if "error" in resp:
        raise Exception(resp["error"]["message"])
    return resp


def poll_analysis(analysis_id: str, timeout_min: int = 20) -> dict:
    for _ in range(timeout_min * 4):
        resp = requests.get(f"{BASE}/video-analyses/{analysis_id}", headers=HEADERS).json()
        if "error" in resp:
            raise Exception(resp["error"]["message"])
        if resp["status"] == "COMPLETED":
            return resp
        if resp["status"] == "FAILED":
            raise Exception(f"Analysis {analysis_id} failed")
        time.sleep(15)
    raise TimeoutError(f"Analysis {analysis_id} timed out")


def analyze_batch(assets: list[dict]) -> list[dict]:
    jobs = []
    for asset in assets:
        job = create_analysis(asset["id"], asset["name"])
        print(f"  Created analysis {job['id']} for {asset['name']}")
        jobs.append({"analysis_id": job["id"], "name": asset["name"]})

    results = []
    for job in jobs:
        result = poll_analysis(job["analysis_id"])
        metrics = result.get("results", {}).get("full_video_metrics", {})
        print(f"  Completed {job['name']}: overall={metrics.get('overall_score')}")
        results.append({"name": job["name"], "analysis_id": job["analysis_id"], "metrics": metrics})
    return results

Stage 3 — Rank and Format

def rank_results(results: list[dict]) -> list[dict]:
    ranked = sorted(results, key=lambda r: r["metrics"].get("overall_score", 0), reverse=True)
    for i, r in enumerate(ranked):
        r["rank"] = i + 1
    return ranked


def format_metrics_table(ranked: list[dict]) -> str:
    lines = ["| Rank | Ad | Overall | Emotion | Attention | Brand Recall | CTA |",
             "|------|----|---------|---------|-----------|--------------|----|"]
    for r in ranked:
        m = r["metrics"]
        lines.append(f"| {r['rank']} | {r['name']} | {m.get('overall_score', '—')}/100 "
                      f"| {m.get('emotional_impact', '—')}/10 | {m.get('attention_score', '—')}/10 "
                      f"| {m.get('brand_recall_likelihood', '—')} | {m.get('cta_effectiveness', '—')}/10 |")
    return "\n".join(lines)


def format_chunk_highlights(results: list[dict]) -> str:
    highlights = []
    for r in results:
        chunks = r["metrics"].get("chunks", [])
        if not chunks:
            continue
        best = max(chunks, key=lambda c: c.get("engagement", 0))
        worst = min(chunks, key=lambda c: c.get("engagement", 0))
        highlights.append(f"- **{r['name']}**: peak at {best.get('start_time', '?')}s "
                          f"({best.get('engagement', '?')}), low at {worst.get('start_time', '?')}s "
                          f"({worst.get('engagement', '?')})")
    return "\n".join(highlights)

Stage 4 — Mave Synthesis

def generate_audit(ranked: list[dict]) -> str:
    table = format_metrics_table(ranked)
    highlights = format_chunk_highlights(ranked)

    prompt = f"""You are a senior creative strategist conducting a quarterly ad creative audit.

## Ranked Ad Performance
{table}

## Per-Ad Chunk Highlights
{highlights}

## Your Task
Produce a quarterly creative audit:
1. **Executive Summary** — 3-sentence overview of the quarter's creative performance.
2. **Ranked Scorecard** — Table with commentary on each ad's strengths and weaknesses.
3. **Elements to Keep** — Which creative elements (hooks, music, pacing, CTA placement, talent) appear in top ads? Be specific.
4. **Elements to Change** — Which elements correlate with lower scores? Patterns, not individual ads.
5. **Hook Analysis** — Compare the first chunk across all ads. Which opening strategies won?
6. **Brand Recall Deep Dive** — Why did some ads score higher? Logo placement? Early mention?
7. **Recommendations for Next Quarter** — 5 specific, actionable briefs for the creative team.

Cite specific ads by name. Use the scores above as evidence."""

    resp = requests.post(f"{BASE}/mave/chat", headers=HEADERS,
                         json={"message": prompt}, timeout=180).json()
    if "error" in resp:
        raise Exception(resp["error"]["message"])
    return resp["content"]

Running the Full Pipeline

def run_audit(ad_directory: str = "./q4_ads"):
    print("=== Quarterly Ad Creative Audit ===\n")

    print("Stage 1: Uploading videos...")
    assets = upload_all(ad_directory)
    print(f"  Uploaded {len(assets)} ads\n")

    print("Stage 2: Running Video Analysis (batch)...")
    results = analyze_batch(assets)
    print(f"  All {len(results)} analyses complete\n")

    print("Stage 3: Ranking...")
    ranked = rank_results(results)
    print(format_metrics_table(ranked))

    print("\nStage 4: Generating audit with Mave...")
    report = generate_audit(ranked)

    with open("quarterly_creative_audit.md", "w") as f:
        f.write(f"# Quarterly Creative Audit — {time.strftime('%B %Y')}\n\n{report}")
    print("Saved to quarterly_creative_audit.md")
    return ranked, report

if __name__ == "__main__":
    import sys
    run_audit(sys.argv[1] if len(sys.argv) > 1 else "./q4_ads")

Example Output

# Quarterly Creative Audit — March 2026

## Executive Summary
Q4 shipped 8 ads averaging 71/100. Top: q4_hero_30s (89/100, pain-point hook,
early brand mention). Bottom two: slow establishing shots, <40/100 first-chunk.

| Rank | Ad              | Overall | Emotion | CTA  |
|------|-----------------|---------|---------|------|
| 1    | q4_hero_30s     | 89/100  | 9/10    | 8/10 |
| 2    | q4_promo_15s    | 82/100  | 8/10    | 9/10 |
| 3    | q4_story_45s    | 74/100  | 8/10    | 6/10 |

## Keep: Pain-point hooks, early brand mention, fast cuts (≤3s shots)
## Change: Slow establishing shots (5+ seconds), late CTA (after 25s)

Variations

Run the same pipeline monthly with 2–4 ads. Track score trends by appending to a CSV:
import csv, os
write_header = not os.path.exists("audit_trend.csv")
with open("audit_trend.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "ad", "overall", "emotion", "attention"])
    if write_header:
        writer.writeheader()  # only on the first run, so the trend file stays parseable
    for r in ranked:
        writer.writerow({"date": time.strftime("%Y-%m-%d"), "ad": r["name"],
                         "overall": r["metrics"].get("overall_score"),
                         "emotion": r["metrics"].get("emotional_impact"),
                         "attention": r["metrics"].get("attention_score")})
Separate brand-awareness from performance/retargeting ads — different goals shouldn’t share a ranking scale.
brand_ads = [r for r in results if "brand" in r["name"].lower()]
perf_ads = [r for r in results if "retarget" in r["name"].lower() or "promo" in r["name"].lower()]
brand_report = generate_audit(rank_results(brand_ads))
perf_report = generate_audit(rank_results(perf_ads))
Take #1 and last-place ads into a Focus Group to validate scores with simulated audience reactions.
Pass webhook_url when creating analyses. Mavera POSTs to your endpoint on completion.
resp = requests.post(f"{BASE}/video-analyses", headers=HEADERS, json={
    "webhook_url": "https://your-server.com/hooks/analysis-complete",
    # ... other fields ...
})
For deeper audits, include per-chunk engagement curves in the Mave prompt.
def format_timeline(results):
    lines = []
    for r in results:
        # Build "start:engagement" pairs outside the f-string; escaped quotes
        # inside f-string expressions are a syntax error before Python 3.12.
        points = ", ".join(
            f"{c.get('start_time', 0)}s:{c.get('engagement', 0)}"
            for c in r["metrics"].get("chunks", [])
        )
        lines.append(f"- {r['name']}: [{points}]")
    return "\n".join(lines)

Credits Estimate

| Stage | Typical Cost | Notes |
|-------|--------------|-------|
| File uploads (×N) | 0 | Free |
| Video Analysis (×N) | 100–250 each | Depends on video length (15–60 s) |
| Mave synthesis | 15–30 | Single research query |
| 8-ad audit | ~850–2,050 | Conservative upper bound |
| 4-ad audit | ~430–1,030 | Smaller batch |
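The audit ranges follow from simple arithmetic on the per-stage figures; the table pads the raw sums slightly as a buffer. A minimal estimator (hypothetical helper, using the assumptions above, so treat your dashboard as the source of truth):

```python
def estimate_credits(n_videos: int) -> tuple[int, int]:
    """Rough low/high credit estimate for one audit run.

    Assumes 100-250 credits per Video Analysis and 15-30 for the single
    Mave synthesis query, with uploads free. Illustrative only.
    """
    low = n_videos * 100 + 15
    high = n_videos * 250 + 30
    return low, high
```

For example, estimate_credits(8) gives (815, 2030), which the table rounds up to ~850–2,050.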
Start with your 4 most important ads to validate the pipeline at lower cost. Scale to the full quarter once you trust the scoring.
Video Analysis cost scales with video length. A 60-second ad costs ~2.5× a 15-second ad. If budget-constrained, trim videos to the first 30 seconds before upload.
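One way to do the trimming is with ffmpeg (assumed installed separately; the helper names below are illustrative). -t caps output duration and -c copy skips re-encoding, which is fast but cuts on keyframes, so the result may run a fraction of a second long:

```python
import subprocess

def trim_command(src: str, dst: str, seconds: int = 30) -> list[str]:
    """Build an ffmpeg command that keeps only the first `seconds` of a video."""
    return ["ffmpeg", "-y", "-i", src, "-t", str(seconds), "-c", "copy", dst]

def trim_video(src: str, dst: str, seconds: int = 30) -> None:
    """Run the trim; raises CalledProcessError if ffmpeg fails."""
    subprocess.run(trim_command(src, dst, seconds), check=True)
```

Run this over the directory before Stage 1 if you want predictable per-video cost.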

See Also

Hook Analysis Sprint

Zoom into the first 3 seconds across 10 variants

Competitor Reel

Benchmark your ads against competitors

Video + Focus Group Double

Layer AI scoring with synthetic audience interpretation

Video Analysis

Metrics reference, chunk options, and chat endpoint

Mave Agent

Research agent with sources and validation

Credits & Budget

Pre-flight checks and budget alerts