Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Mavera Surfaces

| Surface | Role |
|---------|------|
| Files (POST /files/upload-url, POST /files) | Upload each hook variant |
| Video Analysis (POST /video-analyses) | Frame-level scoring with chunk_duration: 3 to isolate the hook |
| Mave (POST /mave/chat) | Compare first-chunk metrics across variants and recommend winners |

What Value Does Mavera Add?

| Value | How |
|-------|-----|
| Insurance | Test 10 hooks before committing media budget. Kill weak openers with data, not opinion. |
| Opening new doors | First-chunk emotional intensity and cognitive load scores reveal why a hook works — not just whether it does. |
| Saving time | Running 10 hook variants through human A/B testing takes weeks. This sprint takes minutes. |

When to Use This

  • You have a hero ad concept and need to decide which opening 3 seconds to ship.
  • You’re producing short-form content (TikTok, Reels, Shorts) where the hook is the ad.
  • You want to quantify the “scroll-stopping power” of different opening strategies.
  • You’re building a hook playbook for your creative team and need data to back guidelines.

What You Need

| Requirement | Details |
|-------------|---------|
| Mavera API key | Starts with mvra_live_. Get one at Developer Settings. |
| Workspace ID | From your dashboard URL (ws_...). |
| 10 video variants | Same ad with 10 different hooks (first 3 seconds). MP4 or MOV, 6–15 s each. |
| Credits | ~100–200 per video + ~15–30 for Mave. See Credits Estimate. |
| Python 3.9+ or Node.js 18+ | requests for Python (the samples use list[dict] annotations, which need 3.9+); native fetch for Node. |
MAVERA_API_KEY=mvra_live_your_key_here
MAVERA_WORKSPACE_ID=ws_your_workspace_id
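A quick preflight (a minimal sketch using the variable names above) fails fast if either value is missing or malformed:

```python
import os

def preflight() -> None:
    """Fail fast if credentials are missing or malformed."""
    key = os.environ.get("MAVERA_API_KEY", "")
    ws = os.environ.get("MAVERA_WORKSPACE_ID", "")
    if not key.startswith("mvra_live_"):
        raise SystemExit("MAVERA_API_KEY missing or not a live key (expected mvra_live_ prefix)")
    if not ws.startswith("ws_"):
        raise SystemExit("MAVERA_WORKSPACE_ID missing (expected ws_ prefix)")
```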
Why 10 variants? Fewer than 5 doesn’t give enough signal. More than 15 adds cost without proportional insight. 10 is the sweet spot for a sprint.

The Flow

1. **Prepare 10 hook variants**
   Same base ad, 10 different openings. Vary strategy: question, statistic, pain point, humor, visual shock, testimonial, product close-up, text overlay, music sting, silent open.
2. **Upload all variants via Files API**
   Name by hook strategy: hook_question.mp4, hook_humor.mp4.
3. **Run Video Analysis with chunk_duration: 3**
   The first chunk isolates exactly the hook. A higher frames_per_chunk (5) gives denser sampling.
4. **Extract and compare first-chunk metrics**
   Pull chunks[0] from each. Compare emotional_intensity, cognitive_load, engagement, attention.
5. **Synthesize with Mave**
   Feed the comparison table into Mave: “Rank these hooks. Which opening strategy wins and why?”

Stage 1 — Upload Hook Variants

import os, time, json, glob, requests

API_KEY = os.environ["MAVERA_API_KEY"]
WORKSPACE_ID = os.environ["MAVERA_WORKSPACE_ID"]
BASE = "https://app.mavera.io/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

HOOK_STRATEGIES = [
    "question", "statistic", "pain_point", "humor", "visual_shock",
    "testimonial", "product_closeup", "text_overlay", "music_sting", "silent_open",
]


def upload_video(path: str) -> dict:
    with open(path, "rb") as f:
        content = f.read()
    name = os.path.basename(path)
    mime = "video/mp4" if path.lower().endswith(".mp4") else "video/quicktime"

    url_resp = requests.post(f"{BASE}/files/upload-url", headers=HEADERS, json={
        "file_name": name, "file_type": mime, "file_size": len(content), "workspace_id": WORKSPACE_ID,
    }).json()
    if "error" in url_resp:
        raise Exception(url_resp["error"]["message"])

    requests.put(url_resp["upload_url"], data=content, headers={"Content-Type": mime}).raise_for_status()

    file_rec = requests.post(f"{BASE}/files", headers=HEADERS, json={
        "name": name, "type": mime, "url": url_resp["public_url"],
        "workspace_id": WORKSPACE_ID, "file_size": len(content),
    }).json()
    if "error" in file_rec:
        raise Exception(file_rec["error"]["message"])
    return {"id": file_rec["id"], "name": name}


def upload_hook_variants(directory: str) -> list[dict]:
    paths = sorted(glob.glob(os.path.join(directory, "*.mp4"))
                   + glob.glob(os.path.join(directory, "*.mov")))
    if not paths:
        raise FileNotFoundError(f"No video files in {directory}")
    assets = []
    for i, p in enumerate(paths):
        asset = upload_video(p)
        asset["strategy"] = HOOK_STRATEGIES[i] if i < len(HOOK_STRATEGIES) else f"variant_{i+1}"
        print(f"  [{i+1}/{len(paths)}] {asset['name']} ({asset['strategy']}) → {asset['id']}")
        assets.append(asset)
    return assets
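upload_hook_variants assigns strategies by sorted filename order, so file names must sort in the same order as HOOK_STRATEGIES. A more robust alternative (a hypothetical helper, not part of the API) derives the strategy from the hook_<strategy> naming convention suggested in the Flow:

```python
import os

# Same list as in Stage 1.
HOOK_STRATEGIES = [
    "question", "statistic", "pain_point", "humor", "visual_shock",
    "testimonial", "product_closeup", "text_overlay", "music_sting", "silent_open",
]

def strategy_from_filename(path: str) -> str:
    """Map hook_question.mp4 -> question; fall back to the raw stem."""
    stem = os.path.splitext(os.path.basename(path))[0]
    name = stem[len("hook_"):] if stem.startswith("hook_") else stem
    return name if name in HOOK_STRATEGIES else stem
```

Swap this into upload_hook_variants in place of the positional lookup if your files follow the naming convention.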

Stage 2 — Video Analysis with Short Chunks

The key parameter is chunk_duration: 3, combined with frames_per_chunk: 5 for maximum sampling density in the hook window.

def create_hook_analysis(asset_id: str, strategy: str) -> dict:
    resp = requests.post(f"{BASE}/video-analyses", headers=HEADERS, json={
        "title": f"Hook Sprint: {strategy}", "asset_id": asset_id,
        "goal": "Measure emotional intensity, cognitive load, and attention in the opening hook",
        "brand": "Brand", "product": "Product",
        "primary_intent": "Stop the scroll and drive watch-through",
        "chunk_duration": 3, "frames_per_chunk": 5, "workspace_id": WORKSPACE_ID,
    }).json()
    if "error" in resp:
        raise Exception(resp["error"]["message"])
    return resp


def poll_analysis(analysis_id: str, timeout_min: int = 20) -> dict:
    for _ in range(timeout_min * 4):
        resp = requests.get(f"{BASE}/video-analyses/{analysis_id}", headers=HEADERS).json()
        if "error" in resp:
            raise Exception(resp["error"]["message"])
        if resp["status"] == "COMPLETED":
            return resp
        if resp["status"] == "FAILED":
            raise Exception(f"Analysis {analysis_id} failed")
        time.sleep(15)
    raise TimeoutError(f"Analysis {analysis_id} timed out")


def analyze_all_hooks(assets: list[dict]) -> list[dict]:
    jobs = []
    for asset in assets:
        job = create_hook_analysis(asset["id"], asset["strategy"])
        print(f"  Created analysis {job['id']} for {asset['strategy']}")
        jobs.append({"analysis_id": job["id"], "name": asset["name"], "strategy": asset["strategy"]})

    results = []
    for job in jobs:
        result = poll_analysis(job["analysis_id"])
        metrics = result.get("results", {}).get("full_video_metrics", {})
        results.append({"name": job["name"], "strategy": job["strategy"],
                        "metrics": metrics, "chunks": metrics.get("chunks", [])})
        print(f"  Completed {job['strategy']}")
    return results
Setting chunk_duration below 3 may not provide enough frames for reliable scoring. 3 seconds is the minimum recommended.
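To sanity-check the chunking before spending credits: with chunk_duration: 3 the first chunk covers exactly seconds 0–3, and frames_per_chunk: 5 samples a frame every 0.6 s inside it. A small helper (illustrative arithmetic; it assumes fixed-length chunks, which may not match how the API handles a partial final chunk):

```python
import math

def chunk_plan(video_seconds: float, chunk_duration: int = 3, frames_per_chunk: int = 5) -> dict:
    """Estimate how many chunks and sampled frames an analysis will produce."""
    chunks = math.ceil(video_seconds / chunk_duration)
    return {
        "chunks": chunks,
        "total_frames": chunks * frames_per_chunk,
        "seconds_between_frames": chunk_duration / frames_per_chunk,
    }

# A 12 s variant -> 4 chunks, 20 sampled frames, one frame every 0.6 s.
plan = chunk_plan(12)
```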

Stage 3 — Extract and Compare First-Chunk Metrics

The first chunk (chunks[0]) contains exactly the hook.

def extract_first_chunks(results: list[dict]) -> list[dict]:
    hook_data = []
    for r in results:
        chunk = r["chunks"][0] if r["chunks"] else {}
        hook_data.append({
            "strategy": r["strategy"], "name": r["name"],
            "emotional_intensity": chunk.get("emotional_intensity", 0),
            "cognitive_load": chunk.get("cognitive_load", 0),
            "engagement": chunk.get("engagement", 0),
            "attention": chunk.get("attention", 0),
        })
    return sorted(hook_data, key=lambda x: x["emotional_intensity"], reverse=True)


def compute_hook_score(h: dict) -> float:
    """Weighted composite: emotion 40% + engagement 30% + attention 20% + inverse cog. load 10%."""
    return (h["emotional_intensity"] * 0.4 + (h["engagement"] / 10) * 0.3
            + h["attention"] * 0.2 + (10 - h["cognitive_load"]) * 0.1)


def format_hook_table(hook_data: list[dict]) -> str:
    lines = ["| Rank | Hook Strategy | Emotion | Cog. Load | Engagement | Attention |",
             "|------|---------------|---------|-----------|------------|-----------|"]
    for i, h in enumerate(hook_data):
        lines.append(f"| {i+1} | {h['strategy']} | {h['emotional_intensity']}/10 "
                      f"| {h['cognitive_load']}/10 | {h['engagement']}/100 | {h['attention']}/10 |")
    return "\n".join(lines)


def format_composite_ranking(hook_data: list[dict]) -> str:
    scored = sorted(hook_data, key=compute_hook_score, reverse=True)
    lines = ["| Rank | Hook Strategy | Composite | Emotion | Engagement | Attention | Cog. Load |",
             "|------|---------------|-----------|---------|------------|-----------|-----------|"]
    for i, h in enumerate(scored):
        lines.append(f"| {i+1} | {h['strategy']} | {compute_hook_score(h):.1f}/10 "
                      f"| {h['emotional_intensity']}/10 | {h['engagement']}/100 "
                      f"| {h['attention']}/10 | {h['cognitive_load']}/10 |")
    return "\n".join(lines)

Composite Score Weights

| Metric | Weight | Why |
|--------|--------|-----|
| Emotional intensity | 40% | High-emotion hooks stop the scroll |
| Engagement | 30% | Predicts watch-through |
| Attention | 20% | Keeps the viewer in the first critical seconds |
| Inverse cognitive load | 10% | Hooks requiring too much processing lose viewers |
Cognitive load is inversely weighted — lower is better for hooks. A hook requiring mental effort to decode loses viewers before the message lands.
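As a worked example of the weighting (repeating the Stage 3 function so this runs standalone), take a hook scoring 9/10 emotion, 87/100 engagement, 9/10 attention, and 3/10 cognitive load:

```python
def compute_hook_score(h: dict) -> float:
    """Weighted composite: emotion 40% + engagement 30% + attention 20% + inverse cog. load 10%."""
    return (h["emotional_intensity"] * 0.4 + (h["engagement"] / 10) * 0.3
            + h["attention"] * 0.2 + (10 - h["cognitive_load"]) * 0.1)

# 0.4*9 + 0.3*(87/10) + 0.2*9 + 0.1*(10-3) = 3.6 + 2.61 + 1.8 + 0.7 = 8.71
score = compute_hook_score({
    "emotional_intensity": 9, "engagement": 87, "attention": 9, "cognitive_load": 3,
})
```

Note how the cognitive load term rewards simplicity: the same hook with a 7/10 load would lose 0.4 composite points.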

Stage 4 — Mave Synthesis

def generate_hook_report(hook_data: list[dict]) -> str:
    raw_table = format_hook_table(hook_data)
    composite = format_composite_ranking(hook_data)

    prompt = f"""You are a creative director specializing in short-form video hooks.

## First-Chunk Metrics (0–3 seconds)
{raw_table}

## Composite Hook Score
{composite}

## Your Task
Produce a hook optimization report:
1. **Winner** — Which hook strategy wins and why? Cite specific scores.
2. **Runner-Up** — Second-best hook and what it does differently.
3. **Worst Performer** — Which strategy failed and why?
4. **Emotional Intensity Analysis** — What drives high emotion in the first 3 seconds?
5. **Cognitive Load Traps** — Which hooks overloaded viewers?
6. **Pattern Recognition** — Patterns across top 3 and bottom 3 hooks?
7. **Hook Playbook** — 5 rules for scroll-stopping hooks, derived from this data.
8. **Recommended A/B Test** — 2 hooks for paid testing and the hypothesis to validate.

Reference strategies by name. Cite scores."""

    resp = requests.post(f"{BASE}/mave/chat", headers=HEADERS,
                         json={"message": prompt}, timeout=180).json()
    if "error" in resp:
        raise Exception(resp["error"]["message"])
    return resp["content"]

Running the Full Sprint

def run_hook_sprint(variant_directory: str = "./hook_variants"):
    assets = upload_hook_variants(variant_directory)
    results = analyze_all_hooks(assets)
    hook_data = extract_first_chunks(results)
    print(format_composite_ranking(hook_data))
    report = generate_hook_report(hook_data)

    with open("hook_analysis_report.md", "w") as f:
        f.write(f"# Hook Analysis Sprint — {time.strftime('%Y-%m-%d')}\n\n")
        f.write(format_composite_ranking(hook_data))
        f.write(f"\n\n---\n\n{report}")
    print("Saved to hook_analysis_report.md")
    return hook_data, report

if __name__ == "__main__":
    import sys
    run_hook_sprint(sys.argv[1] if len(sys.argv) > 1 else "./hook_variants")

Example Output

# Hook Analysis Sprint Report

**Variants tested:** 10 | **Date:** 2026-03-17

| Rank | Hook Strategy | Composite | Emotion | Engagement | Attention | Cog. Load |
|------|---------------|-----------|---------|------------|-----------|-----------|
| 1    | pain_point    | 8.7/10    | 9/10    | 87/100     | 9/10      | 3/10      |
| 2    | visual_shock  | 8.2/10    | 9/10    | 79/100     | 9/10      | 6/10      |
| 3    | question      | 7.9/10    | 8/10    | 82/100     | 8/10      | 4/10      |
| ...  | ...           | ...       | ...     | ...        | ...       | ...       |
| 10   | silent_open   | 3.7/10    | 3/10    | 31/100     | 4/10      | 2/10      |

## Winner
**pain_point** — composite 8.7/10. Emotion of 9/10 (tied with visual_shock) at far lower
cognitive load (3/10 vs 6/10). Emotional urgency + simplicity is the winning formula...

Variations

  • Run the same 10 hooks with different primary_intent values for TikTok vs YouTube to see if the same hooks win on both platforms.
  • After the sprint, run the top 3 hooks through a Focus Group for simulated audience preference data.
  • For YouTube pre-rolls or TV spots, change chunk_duration to 5.
  • Append each sprint to a CSV for quarter-over-quarter trends.
  • Create audio-muted and audio-only (black screen) versions. Compare to separate audio vs visual emotional drivers.
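The CSV variation can be sketched like this (the column names and file name are assumptions, not an API contract):

```python
import csv
import os
import time

def append_sprint_to_csv(hook_data: list, csv_path: str = "hook_sprints.csv") -> None:
    """Append one row per hook variant so sprints accumulate for trend analysis."""
    fields = ["date", "strategy", "emotional_intensity", "cognitive_load", "engagement", "attention"]
    write_header = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        if write_header:
            writer.writeheader()
        for h in hook_data:
            writer.writerow({"date": time.strftime("%Y-%m-%d"),
                             **{k: h[k] for k in fields[1:]}})
```

Call it at the end of run_hook_sprint with the extracted hook_data to build a quarter-over-quarter record.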

Credits Estimate

| Stage | Typical Cost | Notes |
|-------|--------------|-------|
| File uploads (×10) | 0 | Free |
| Video Analysis (×10) | 100–200 each | Short videos (6–15 s) cost less |
| Mave synthesis | 15–30 | Single research query |
| 10-variant sprint | ~1,015–2,030 | Conservative upper bound |
| 5-variant sprint | ~515–1,030 | Smaller test batch |
Short hook variants (6–10 seconds) keep Video Analysis costs low. You don’t need the full ad — just the hook plus a few seconds of context.
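The sprint totals reduce to simple arithmetic over the per-stage ranges above (an illustrative helper, not a billing API):

```python
def estimate_sprint_credits(n_variants: int) -> tuple:
    """Credit range: uploads free, 100-200 per Video Analysis, 15-30 for Mave synthesis."""
    low = n_variants * 100 + 15
    high = n_variants * 200 + 30
    return (low, high)

# estimate_sprint_credits(10) -> (1015, 2030), matching the 10-variant row.
```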

See Also

  • Ad Creative Audit — Score a full quarter of ads, not just hooks
  • Competitor Reel — Analyze competitor hooks alongside your own
  • Video + Focus Group Double — Layer hook scores with synthetic audience reactions
  • Video Analysis — Full metrics reference and chunk configuration
  • Mave Agent — Research agent for synthesis and recommendations
  • Credits & Budget — Pre-flight checks and budget alerts