

What You’ll Learn

In this quickstart you will:
  • Upload a video to Mavera using the Files API (presigned URL flow) so it can be used as input for analysis.
  • Create a video analysis job with a title, goal, brand/product context, and analysis settings (chunk duration, frames per chunk).
  • Poll for completion and then read full-video metrics (overall score, emotional impact, attention, CTA effectiveness) and chunk-level breakdowns.
  • Optionally use the analysis chat endpoint to ask follow-up questions (e.g. “What are the weakest moments and how can I improve them?”).
Video Analysis is built for ad creatives, product videos, and any content where you want measurable engagement and emotional signals.
Time: About 20 minutes (upload is quick; analysis can take 2–10 minutes depending on video length). Credits: Approximately 100–500 credits depending on duration; see table below.

Prerequisites

Mavera account with an active subscription and enough credits (video analysis is credit-heavy).
API key from Developer Settings.
Workspace ID where the video file and analysis will live. Find it in the dashboard or via the workspaces API.
A video file on your machine (e.g. MP4, MOV). Short clips (e.g. 15–60 seconds) are best for a first run; max size is 2 GB for video files.
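If you are unsure which Content-Type string to send for your file, Python's standard-library mimetypes module can guess it from the extension. This helper is a small convenience sketch (not part of any Mavera SDK) that rejects non-video files early:

```python
import mimetypes

def video_content_type(path: str) -> str:
    """Guess the Content-Type for a local video file from its extension."""
    guessed, _ = mimetypes.guess_type(path)
    if guessed is None or not guessed.startswith("video/"):
        raise ValueError(f"Unsupported or unrecognized video file: {path}")
    return guessed

# video_content_type("clip.mp4") -> "video/mp4"
# video_content_type("clip.mov") -> "video/quicktime"
```

You can use the returned value as the file_type in the upload request below instead of hard-coding it.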

Overview of the Flow

1. Upload the video. Use the Files API: request a presigned upload URL, upload the file to that URL, then create a file record. You receive a file ID (used as asset_id for video analysis).

2. Create the analysis. Call POST /video-analyses with asset_id (your file ID), plus title, goal, brand, product, intent, and analysis options. You receive an analysis ID and initial status (e.g. PENDING).

3. Poll until completed. Call GET /video-analyses/{id} periodically until status is COMPLETED. Analysis typically takes a few minutes.

4. Read results. From the completed analysis, use results.full_video_metrics for overall scores and results.chunks for segment-by-segment breakdowns. Optionally use POST /video-analyses/{id}/chat to ask questions about the analysis.

Step 1: Upload the Video (Files API)

Videos must be in Mavera’s storage before you can analyze them. The Files API uses a presigned URL flow: you never send the file bytes to the Mavera API server; you upload directly to storage.

Step 1a: Request a presigned upload URL

Send file metadata (name, type, size, workspace_id) to get an upload_url and public_url.
import requests

API_KEY = "mvra_live_your_key_here"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
WORKSPACE_ID = "ws_your_workspace_id"
BASE = "https://app.mavera.io/api/v1"

# Path to your video file
VIDEO_PATH = "path/to/your/clip.mp4"

with open(VIDEO_PATH, "rb") as f:
    content = f.read()

file_size = len(content)
file_name = VIDEO_PATH.split("/")[-1]
file_type = "video/mp4"  # or video/quicktime for MOV

upload_resp = requests.post(
    f"{BASE}/files/upload-url",
    headers=HEADERS,
    json={
        "file_name": file_name,
        "file_type": file_type,
        "file_size": file_size,
        "workspace_id": WORKSPACE_ID,
    },
).json()

if "error" in upload_resp:
    raise Exception(upload_resp["error"]["message"])

upload_url = upload_resp["upload_url"]
public_url = upload_resp["public_url"]
print(f"Upload URL expires in {upload_resp.get('expires_in', '?')}s")

Step 1b: Upload the file to the presigned URL

Use a PUT request with the file body and the correct Content-Type.
put_resp = requests.put(
    upload_url,
    data=content,
    headers={"Content-Type": file_type},
)
put_resp.raise_for_status()

Step 1c: Create the file record

Register the file with Mavera so you get a stable file ID (this is the asset_id for video analysis).
file_resp = requests.post(
    f"{BASE}/files",
    headers=HEADERS,
    json={
        "name": file_name,
        "type": file_type,
        "url": public_url,
        "workspace_id": WORKSPACE_ID,
        "file_size": file_size,
    },
).json()

if "error" in file_resp:
    raise Exception(file_resp["error"]["message"])

asset_id = file_resp["id"]
print(f"File created; asset_id for video analysis: {asset_id}")
Presigned URLs expire (often within an hour). Upload the file and create the record soon after requesting the URL. If upload fails, request a new URL and retry.
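Because a failed PUT may mean the presigned URL has already expired, a retry should request a fresh URL before every attempt rather than reusing the old one. A minimal sketch of that pattern, where request_upload_url and put_file are hypothetical stand-ins for the two HTTP calls shown above:

```python
import time

def upload_with_retry(request_upload_url, put_file, content, attempts=3, backoff=2.0):
    """Request a fresh presigned URL before each attempt, so an expired URL
    from a failed try is never reused. `request_upload_url` takes no arguments
    and returns a URL; `put_file(url, content)` raises on failure (e.g. via
    raise_for_status()). Returns the URL that succeeded."""
    last_error = None
    for attempt in range(attempts):
        upload_url = request_upload_url()  # fresh URL every time
        try:
            put_file(upload_url, content)
            return upload_url
        except Exception as exc:
            last_error = exc
            time.sleep(backoff * (attempt + 1))  # simple linear backoff
    raise RuntimeError(f"Upload failed after {attempts} attempts") from last_error
```

Passing the two calls in as functions keeps the retry logic independent of the HTTP client and makes it easy to test with fakes.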

Step 2: Create the Video Analysis

Pass the asset_id (your file ID) plus metadata and analysis options. The API returns an analysis ID and status (e.g. PENDING or RUNNING).
analysis_payload = {
    "title": "Quickstart: Ad Clip Analysis",
    "asset_id": asset_id,
    "goal": "Analyze viewer engagement and emotional response",
    "brand": "Your Brand",
    "product": "Product Name",
    "primary_intent": "Drive product awareness",
    "chunk_duration": 5,
    "frames_per_chunk": 3,
    "workspace_id": WORKSPACE_ID,
}

create_resp = requests.post(
    f"{BASE}/video-analyses",
    headers=HEADERS,
    json=analysis_payload,
).json()

if "error" in create_resp:
    raise Exception(create_resp["error"]["message"])

analysis_id = create_resp["id"]
status = create_resp["status"]
print(f"Analysis created: {analysis_id}")
print(f"Status: {status}")
| Parameter | Description |
| --- | --- |
| asset_id | The file ID from the Files API (your uploaded video). |
| chunk_duration | Length of each analyzed segment in seconds (e.g. 5). |
| frames_per_chunk | Number of frames analyzed per chunk (e.g. 3). |
| goal, brand, product, primary_intent | Context used to improve the relevance of metrics and recommendations. |
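chunk_duration and frames_per_chunk together determine how much of the video is actually sampled. Assuming the video is split into ceil(duration / chunk_duration) chunks (a plausible reading of the settings above, not a documented guarantee), a quick back-of-envelope estimate:

```python
import math

def frames_analyzed(video_seconds: float, chunk_duration: int, frames_per_chunk: int) -> int:
    """Estimate how many frames an analysis samples, assuming the video is
    split into ceil(duration / chunk_duration) chunks."""
    chunks = math.ceil(video_seconds / chunk_duration)
    return chunks * frames_per_chunk

# A 30-second clip with 5-second chunks and 3 frames per chunk:
# frames_analyzed(30, 5, 3) -> 18 frames across 6 chunks
```

Smaller chunks give finer-grained segment metrics at the cost of more frames analyzed (and likely more credits).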

Step 3: Poll Until Completed

Analysis runs asynchronously. Poll GET /video-analyses/{id} every 15–30 seconds until status is COMPLETED.
import time

def get_analysis(aid):
    r = requests.get(f"{BASE}/video-analyses/{aid}", headers=HEADERS)
    return r.json()

for _ in range(40):
    analysis = get_analysis(analysis_id)
    if "error" in analysis:
        raise Exception(analysis["error"]["message"])
    if analysis.get("status") == "COMPLETED":
        break
    print(f"Status: {analysis.get('status')}; waiting 15s...")
    time.sleep(15)
else:
    raise Exception("Analysis did not complete in time")
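The loop above can be generalized into a reusable helper that takes the fetch call as a function, which also makes the polling logic testable without network access. A sketch, where fetch is any zero-argument callable returning the analysis dict (e.g. lambda: get_analysis(analysis_id) from the snippet above):

```python
import time

def wait_until_completed(fetch, max_polls=40, interval=15, done_status="COMPLETED"):
    """Poll `fetch()` until the returned dict's status equals `done_status`.
    Raises RuntimeError on an API error payload, TimeoutError if the status
    never arrives within max_polls attempts."""
    for _ in range(max_polls):
        resource = fetch()
        if "error" in resource:
            raise RuntimeError(resource["error"].get("message", "unknown error"))
        if resource.get("status") == done_status:
            return resource
        time.sleep(interval)
    raise TimeoutError(f"Resource did not reach {done_status} within {max_polls} polls")
```

With the defaults (40 polls, 15 s apart) this waits up to 10 minutes, matching the loop above.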

Step 4: Read the Results

Once status is COMPLETED, the response includes a results object with full-video metrics and chunks (segment-level data).
metrics = analysis["results"]["full_video_metrics"]
print("Overall score:", metrics.get("overall_score"))
print("Emotional impact:", metrics.get("emotional_impact"))
print("Attention score:", metrics.get("attention_score"))
print("CTA effectiveness:", metrics.get("cta_effectiveness"))
print("Brand recall likelihood:", metrics.get("brand_recall_likelihood"))

print("\nChunk breakdown:")
for chunk in analysis["results"].get("chunks", []):
    print(f"  {chunk['start_time']}s–{chunk['end_time']}s: engagement={chunk.get('engagement_score')}, valence={chunk.get('emotional_valence')}")

print("\nRecommendations:", analysis["results"].get("recommendations", []))
print("Credits used:", analysis.get("usage", {}).get("credits_used"))
Typical full_video_metrics fields:
| Field | Description |
| --- | --- |
| overall_score | Aggregate score (e.g. 0–100). |
| emotional_impact | Strength of emotional response (e.g. 1–10). |
| attention_score | How well the video holds attention. |
| cta_effectiveness | Effectiveness of the call-to-action. |
| brand_recall_likelihood | Estimated likelihood of brand recall. |
Chunks give you segment-level engagement, emotional valence, key moments, and optional recommendations per segment.
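A common next step is finding the segments most worth re-cutting. This small helper (an illustration, not an API feature) sorts chunks by engagement_score, using the field names from the response shape shown above:

```python
def weakest_chunks(results: dict, n: int = 3) -> list:
    """Return the n chunks with the lowest engagement_score. Chunks missing
    a score are skipped. `results` is the `results` object from a completed
    analysis, as shown above."""
    scored = [c for c in results.get("chunks", []) if c.get("engagement_score") is not None]
    return sorted(scored, key=lambda c: c["engagement_score"])[:n]
```

The returned chunks carry their start_time/end_time, so you can jump straight to those timestamps in your editor.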

Step 5: Chat About the Analysis (Optional)

You can ask natural-language questions about the completed analysis (e.g. weak spots, how to improve). Use POST /video-analyses/{id}/chat.
chat_resp = requests.post(
    f"{BASE}/video-analyses/{analysis_id}/chat",
    headers=HEADERS,
    json={
        "message": "What are the weakest moments in this video and how can I improve them?"
    },
).json()

if "error" in chat_resp:
    raise Exception(chat_resp["error"]["message"])

print(chat_resp.get("content", ""))

Credit Costs

| Video length | Approximate credits |
| --- | --- |
| < 30 seconds | 100–150 |
| 30 s – 1 min | 150–250 |
| 1–3 minutes | 250–400 |
| 3+ minutes | 400+ |
File upload and chat about results use additional credits; see Credits and the API reference.
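If you want a pre-flight sanity check before submitting, the table above can be encoded as a lookup. Treat this as a rough estimate only, since the actual charge depends on duration and analysis settings:

```python
def estimate_credits(video_seconds: float) -> tuple:
    """Rough (low, high) credit range from the pricing table above.
    The high end is open-ended for videos of 3+ minutes (returned as None)."""
    if video_seconds < 30:
        return (100, 150)
    if video_seconds < 60:
        return (150, 250)
    if video_seconds < 180:
        return (250, 400)
    return (400, None)  # 400+, no fixed upper bound
```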

Common Issues

• Upload URL expired or PUT failed: request a new upload URL and retry the PUT and file record creation. Don't delay between getting the URL and uploading.
• Analysis rejects the asset_id: use the id returned from POST /files after uploading, and ensure the file is in the same workspace you use in the analysis request.
• Analysis stays PENDING or RUNNING: longer videos take longer. Poll for several minutes; if it never completes, check the status or contact support.
• Upload rejected: video files are supported up to 2 GB. Use a supported type (e.g. video/mp4, video/quicktime). See Files.
• Insufficient credits: video analysis is credit-intensive. Refill credits or use a shorter clip; see Credits.

Next Steps

• Video Analysis: all metrics, chunk options, and response shapes.
• Files & Folders: upload flow, folders, and using files with other APIs.
• Workspaces: organize files and analyses by workspace.
• API Reference: full video analysis request/response specification.
Use video analysis to compare creatives, optimize ad length, or improve CTA placement—then iterate with new uploads and analyses.