Mavera Surfaces

| Surface | Role |
|---|---|
| Files (`POST /files/upload-url`, `POST /files`) | Upload the video creative |
| Video Analysis (`POST /video-analyses`) | Frame-level AI scoring: engagement, emotion, attention, brand recall, CTA |
| Focus Groups (`POST /focus-groups`) | Simulated audience panel reacts to the video and the AI scores |
| Mave (`POST /mave/chat`) | Synthesize both layers into a single insight report |

What Value Does Mavera Add?

| Value | How |
|---|---|
| Insurance | Two independent methods catch what either alone would miss. AI finds frame-level issues; personas find emotional disconnects. |
| Opening new doors | Using AI scores as stimulus for a Focus Group creates a feedback loop: “The AI says high emotion but low brand recall — do you agree?” Personas explain why. |
| Saving time | A traditional creative test requires real audience recruitment. This delivers comparable depth in minutes. |

When to Use This

  • You have a new creative and want both quantitative scores and qualitative interpretation before launch.
  • Video Analysis returned surprising results (high emotion, low brand recall) and you need to understand why.
  • You need more than numbers for stakeholders — they want to hear “what the audience thinks.”
  • You’re testing a risky concept and need two independent signals before committing budget.

This is the most thorough single-ad analysis in the playbook library. It combines Video Analysis depth with Focus Group interpretive power, then synthesizes both with Mave.

What You Need

| Requirement | Details |
|---|---|
| Mavera API key | Starts with `mvra_live_`. Get one at Developer Settings. |
| Workspace ID | From your dashboard URL (`ws_...`). |
| Persona ID(s) | At least one persona matching your target audience. |
| One video creative | MP4 or MOV, 15–60 s. |
| Credits | ~100–250 (Video) + ~75–150 (Focus Group) + ~15–30 (Mave). See Credits Estimate. |
| Python 3.9+ or Node.js 18+ | `requests` for Python; native `fetch` for Node. The code below uses built-in generic annotations (`list[dict]`), which require Python 3.9. |

MAVERA_API_KEY=mvra_live_your_key_here
MAVERA_WORKSPACE_ID=ws_your_workspace_id
TARGET_PERSONA_ID=persona_your_target
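Before making any calls, it can help to fail fast on missing configuration. A minimal sketch (the `load_config` helper is my own, using the variable names above):

```python
import os

REQUIRED = ("MAVERA_API_KEY", "MAVERA_WORKSPACE_ID", "TARGET_PERSONA_ID")

def load_config() -> dict:
    """Return the three required environment variables, raising if any is unset."""
    missing = [k for k in REQUIRED if not os.environ.get(k)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {k: os.environ[k] for k in REQUIRED}
```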

The Flow

1. **Upload the video.** Standard three-step Files API upload.
2. **Run Video Analysis.** Get frame-level scores. These become the raw data layer.
3. **Extract stimulus from AI scores.** Find tensions — places where one metric is high but another is low. These become Focus Group prompts.
4. **Run Focus Group with AI stimulus.** Present the video AND the AI findings to a 25-person panel: “The AI says this. Do you agree?”
5. **Synthesize with Mave.** Feed both layers into Mave for a combined insight report.

Stage 1 — Upload + Video Analysis

import os, time, json, requests

API_KEY = os.environ["MAVERA_API_KEY"]
WORKSPACE_ID = os.environ["MAVERA_WORKSPACE_ID"]
PERSONA_ID = os.environ["TARGET_PERSONA_ID"]
BASE = "https://app.mavera.io/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}


def upload_video(path: str) -> dict:
    with open(path, "rb") as f:
        content = f.read()
    name = os.path.basename(path)
    mime = "video/mp4" if path.lower().endswith(".mp4") else "video/quicktime"

    url_resp = requests.post(f"{BASE}/files/upload-url", headers=HEADERS, json={
        "file_name": name, "file_type": mime, "file_size": len(content), "workspace_id": WORKSPACE_ID,
    }).json()
    if "error" in url_resp:
        raise Exception(url_resp["error"]["message"])

    requests.put(url_resp["upload_url"], data=content, headers={"Content-Type": mime}).raise_for_status()

    file_rec = requests.post(f"{BASE}/files", headers=HEADERS, json={
        "name": name, "type": mime, "url": url_resp["public_url"],
        "workspace_id": WORKSPACE_ID, "file_size": len(content),
    }).json()
    if "error" in file_rec:
        raise Exception(file_rec["error"]["message"])
    return {"id": file_rec["id"], "name": name}


def create_analysis(asset_id: str, label: str) -> dict:
    resp = requests.post(f"{BASE}/video-analyses", headers=HEADERS, json={
        "title": f"Double Analysis: {label}", "asset_id": asset_id,
        "goal": "Comprehensive assessment: engagement, emotional arc, brand recall, CTA",
        "brand": "Brand", "product": "Product", "primary_intent": "Drive purchase consideration",
        "chunk_duration": 5, "frames_per_chunk": 3, "workspace_id": WORKSPACE_ID,
    }).json()
    if "error" in resp: raise Exception(resp["error"]["message"])
    return resp

def poll_analysis(analysis_id: str, timeout_min: int = 20) -> dict:
    for _ in range(timeout_min * 4):
        resp = requests.get(f"{BASE}/video-analyses/{analysis_id}", headers=HEADERS).json()
        if "error" in resp: raise Exception(resp["error"]["message"])
        if resp["status"] == "COMPLETED": return resp
        if resp["status"] == "FAILED": raise Exception(f"Analysis {analysis_id} failed")
        time.sleep(15)
    raise TimeoutError(f"Analysis {analysis_id} timed out")
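Every call above repeats the same `if "error" in resp` guard; one way to factor it out is a small pass-through helper (the name `check` is my own, not part of the API):

```python
def check(resp: dict) -> dict:
    """Raise on a Mavera-style error envelope; otherwise return the response unchanged."""
    if "error" in resp:
        raise Exception(resp["error"]["message"])
    return resp
```

With it, each call collapses to e.g. `file_rec = check(requests.post(...).json())`.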

Stage 2 — Extract Stimulus from AI Scores

This is the key step. Transform raw metrics into natural-language statements. The best stimulus highlights tensions — where one metric is high but another is low.

def extract_stimulus(metrics: dict) -> list[dict]:
    """Find tensions in metrics — contradictory pairs are the best Focus Group prompts."""
    o, e, a = metrics.get("overall_score", 0), metrics.get("emotional_impact", 0), metrics.get("attention_score", 0)
    br, cta = metrics.get("brand_recall_likelihood", "MEDIUM"), metrics.get("cta_effectiveness", 0)
    chunks = metrics.get("chunks", [])
    stimuli = [{"finding": f"Overall score: {o}/100 ({'top' if o >= 75 else 'middle' if o >= 50 else 'bottom'} tier).", "tension_level": 0}]

    if e >= 7 and br in ("LOW", "VERY_LOW"):
        stimuli.append({"finding": f"High emotion ({e}/10) but low brand recall ({br}). Feels something, can't name who.", "tension_level": 3})
    elif e <= 4 and br in ("HIGH", "VERY_HIGH"):
        stimuli.append({"finding": f"Strong recall ({br}) but low emotion ({e}/10). Knows the brand, doesn't care.", "tension_level": 3})
    if a >= 8 and cta <= 4:
        stimuli.append({"finding": f"High attention ({a}/10) but weak CTA ({cta}/10). Holds eyes, doesn't convert.", "tension_level": 3})
    if chunks:
        f_eng, l_eng = chunks[0].get("engagement", 0), chunks[-1].get("engagement", 0)
        if f_eng >= 70 and l_eng <= 40:
            stimuli.append({"finding": f"Engagement drops from {f_eng} to {l_eng}. Hook works, ad loses people.", "tension_level": 2})
        peak = max(chunks, key=lambda c: c.get("emotional_intensity", 0))
        stimuli.append({"finding": f"Peak emotion at {peak.get('start_time', 0)}s ({peak.get('emotional_intensity', 0)}/10).", "tension_level": 1})

    stimuli.append({"finding": f"Emotion: {e}/10, Attention: {a}/10, Recall: {br}, CTA: {cta}/10.", "tension_level": 0})
    return sorted(stimuli, key=lambda s: s["tension_level"], reverse=True)

def format_stimulus(stimuli):
    return "\n".join(f"{'⚡' if s['tension_level'] >= 2 else '📊'} {s['finding']}" for s in stimuli)

The stimulus logic looks for tensions — contradictory metric pairs. These are the most productive Focus Group prompts because they force personas to explain nuance that raw scores can’t capture.
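As a quick self-contained illustration (sample numbers are hypothetical; `tensions` is a stripped-down stand-in for `extract_stimulus` above), the highest-tension rules are just threshold checks on metric pairs:

```python
# Hypothetical metrics exhibiting both high-tension pairs.
sample = {"emotional_impact": 8, "brand_recall_likelihood": "LOW",
          "attention_score": 9, "cta_effectiveness": 3}

def tensions(m: dict) -> list[str]:
    """Flag the two contradictory metric pairs used as Focus Group prompts."""
    out = []
    if m["emotional_impact"] >= 7 and m["brand_recall_likelihood"] in ("LOW", "VERY_LOW"):
        out.append("emotion-vs-recall")
    if m["attention_score"] >= 8 and m["cta_effectiveness"] <= 4:
        out.append("attention-vs-cta")
    return out

print(tensions(sample))  # ['emotion-vs-recall', 'attention-vs-cta']
```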

Stage 3 — Focus Group with AI Stimulus

Questions reference the AI scores directly. High-tension findings become probing prompts.

def build_focus_group_questions(stimuli: list[dict]) -> list[dict]:
    questions = [
        {"question": "Watch this ad. First impression? Would you keep watching or scroll past?", "type": "OPEN_ENDED", "order": 1},
        {"question": "0-10, how likely to recommend this product after watching?", "type": "NPS", "order": 2},
    ]
    order = 3
    for s in [s for s in stimuli if s["tension_level"] >= 2][:3]:
        questions.append({"question": f'AI analysis found: "{s["finding"]}" — Do you agree? Why or why not?',
                          "type": "OPEN_ENDED", "order": order})
        order += 1
    questions.append({"question": "What single change would improve this ad most?", "type": "OPEN_ENDED", "order": order})
    order += 1
    questions.append({"question": "Do you trust the AI's assessment?", "type": "MULTIPLE_CHOICE",
                      "options": ["Yes", "Partially", "No", "Need context"], "order": order})
    return questions


def run_focus_group(asset_id: str, stimuli: list[dict]) -> dict:
    resp = requests.post(f"{BASE}/focus-groups", headers=HEADERS, json={
        "name": "Double Analysis", "sample_size": 25, "persona_ids": [PERSONA_ID],
        "workspace_id": WORKSPACE_ID, "assets": [{"id": asset_id, "label": "Ad Under Review"}],
        "questions": build_focus_group_questions(stimuli),
    }).json()
    if "error" in resp: raise Exception(resp["error"]["message"])
    return resp

def poll_focus_group(fg_id: str, timeout_min: int = 20) -> dict:
    for _ in range(timeout_min * 6):
        resp = requests.get(f"{BASE}/focus-groups/{fg_id}", headers=HEADERS).json()
        if "error" in resp: raise Exception(resp["error"]["message"])
        if resp["status"] == "COMPLETED": return resp
        if resp["status"] == "FAILED": raise Exception(f"Focus group {fg_id} failed")
        time.sleep(10)
    raise TimeoutError(f"Focus group {fg_id} timed out")

The most powerful question pattern: “The AI says [specific finding]. Do you agree? Why or why not?” This forces personas to engage with data rather than give generic reactions.
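Concretely, turning a high-tension finding into that question is a one-line template (the finding below is a hypothetical example):

```python
finding = "High emotion (8/10) but low brand recall (LOW)."  # hypothetical AI finding
question = f'AI analysis found: "{finding}" — Do you agree? Why or why not?'
print(question)
```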

Stage 4 — Mave Synthesis

Feed both layers in. Mave produces a report that explains the numbers with audience-level insight.

def format_fg_results(fg_results: dict) -> str:
    lines = []
    for r in fg_results.get("results", []):
        lines.append(f"### Q: {r['question']}")
        if r["type"] == "NPS": lines.append(f"NPS: {r.get('nps_score', 'N/A')}")
        if r.get("summary"): lines.append(r["summary"])
        lines.append("")
    return "\n".join(lines)
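To see the shape this produces, run the formatter on a hypothetical payload (the function is restated here so the snippet is standalone; field names mirror the code above):

```python
def format_fg_results(fg_results: dict) -> str:
    """Render Focus Group results as markdown sections, one per question."""
    lines = []
    for r in fg_results.get("results", []):
        lines.append(f"### Q: {r['question']}")
        if r["type"] == "NPS":
            lines.append(f"NPS: {r.get('nps_score', 'N/A')}")
        if r.get("summary"):
            lines.append(r["summary"])
        lines.append("")
    return "\n".join(lines)

# Hypothetical results payload for illustration.
sample = {"results": [
    {"question": "First impression?", "type": "OPEN_ENDED", "summary": "Mostly positive."},
    {"question": "Likelihood to recommend?", "type": "NPS", "nps_score": 12},
]}
print(format_fg_results(sample))
```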


def generate_layered_report(metrics: dict, stimuli: list[dict], fg_results: dict) -> str:
    m = metrics
    metrics_line = f"Overall: {m.get('overall_score', '?')}/100 | Emotion: {m.get('emotional_impact', '?')}/10 | Attention: {m.get('attention_score', '?')}/10 | Recall: {m.get('brand_recall_likelihood', '?')} | CTA: {m.get('cta_effectiveness', '?')}/10"
    chunks_text = "\n".join(f"  {c.get('start_time', 0)}–{c.get('end_time', 5)}s: eng={c.get('engagement', '?')}, emo={c.get('emotional_intensity', '?')}"
                            for c in m.get("chunks", []))

    prompt = f"""You are a senior creative analyst producing a layered video ad analysis.

## Layer 1: AI Video Analysis
**Metrics:** {metrics_line}
**Chunks:**
{chunks_text or "No chunk data."}
**Findings:**
{format_stimulus(stimuli)}

## Layer 2: Focus Group (25 respondents, shown ad + AI findings)
{format_fg_results(fg_results)}

## Your Task — Layered Analysis
1. **Executive Summary** — 3 sentences no single layer could produce.
2. **Where AI and Audience Agree** — Highest-confidence insights.
3. **Where They Disagree** — Which signal to trust and why.
4. **Tension Resolution** — Audience reaction to each high-tension finding.
5. **Emotional Journey** — Chunk data + audience descriptions combined.
6. **Brand Recall** — AI score vs what audience remembers.
7. **CTA Assessment** — AI score vs NPS.
8. **The One Change** — Highest-impact change from both layers.
9. **Final Verdict** — Ship, iterate, or rethink?

Reference AI scores and Focus Group summaries together."""

    resp = requests.post(f"{BASE}/mave/chat", headers=HEADERS,
                         json={"message": prompt}, timeout=180).json()
    if "error" in resp:
        raise Exception(resp["error"]["message"])
    return resp["content"]

Running the Full Pipeline

def run_double_analysis(video_path: str = "./new_creative.mp4"):
    asset = upload_video(video_path)
    analysis = create_analysis(asset["id"], asset["name"])
    result = poll_analysis(analysis["id"])
    metrics = result.get("results", {}).get("full_video_metrics", {})
    print(f"AI Score: {metrics.get('overall_score')}/100")

    stimuli = extract_stimulus(metrics)
    print(format_stimulus(stimuli))

    fg = run_focus_group(asset["id"], stimuli)
    fg_result = poll_focus_group(fg["id"])
    report = generate_layered_report(metrics, stimuli, fg_result)

    with open("double_analysis_report.md", "w") as f:
        f.write(f"# Double Analysis — {asset['name']}\n\n{format_stimulus(stimuli)}\n\n---\n\n{report}")
    print("Saved to double_analysis_report.md")
    return {"metrics": metrics, "stimuli": stimuli, "fg_result": fg_result, "report": report}

if __name__ == "__main__":
    import sys
    run_double_analysis(sys.argv[1] if len(sys.argv) > 1 else "./new_creative.mp4")

Example Output

# Video + Focus Group Double Analysis

**Ad:** spring_launch_30s.mp4 | **AI Score:** 72/100

## AI Stimulus
⚡ High emotion (8/10) but low brand recall (LOW). Feels something, can't name who.
⚡ High attention (8/10) but weak CTA (3/10). Holds eyes, doesn't convert.

## Executive Summary
The combination reveals a creative that *moves* people but fails to *brand*
them. 18 of 25 Focus Group respondents described how the ad made them feel,
but only 7 named the brand without prompting...

## Where AI and Audience Disagree
AI rated CTA at 3/10, but 15 respondents said they'd "probably visit the
website." AI measures CTA clarity/placement; audience responds to overall
persuasion — different valid lenses on the same ad...

Variations

  • Use 3–5 persona IDs for segment-level reactions to the AI findings.
  • Run the full pipeline on both Creative A and Creative B, then compare layered reports side by side.
  • After the first analysis, make changes, re-upload, and run again; compare reports to measure improvement.
  • For stakeholder meetings, use the AI stimulus + Focus Group results as-is, without Mave synthesis.
  • If all metrics are moderate, manually inject: “All metrics are moderate (50–70). Is this ad ‘safe but forgettable’?”

Credits Estimate

| Stage | Typical Cost | Notes |
|---|---|---|
| File upload | 0 | Free |
| Video Analysis | 100–250 | Depends on video length |
| Focus Group (N=25, 6–8 questions) | 75–150 | Sample size + question count |
| Mave synthesis | 15–30 | Large context from both layers |
| Single-ad double analysis | ~190–430 | Conservative range |

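The conservative total is simply the sum of the per-stage lows and highs:

```python
# Per-stage credit ranges from the Credits Estimate table.
stages = {"video_analysis": (100, 250), "focus_group": (75, 150), "mave": (15, 30)}
low = sum(a for a, _ in stages.values())
high = sum(b for _, b in stages.values())
print(f"~{low}-{high} credits")  # ~190-430 credits
```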
Reserve this for high-stakes creatives — hero campaigns, product launches, brand films. For routine testing, use Ad Creative Audit instead.

See Also

  • Ad Creative Audit: Score multiple ads without the Focus Group layer
  • Hook Analysis Sprint: Deep-dive into the first 3 seconds across 10 variants
  • Competitor Reel: Analyze competitor ads with the same Video Analysis pipeline
  • Video Analysis: Metrics reference, chunk options, and chat endpoint
  • Focus Groups: All 12 question types and audience simulation
  • Mave Agent: Research agent for synthesis and recommendations