Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Scenario

Take industry research reports and use Claude to extract key findings, controversial claims, and actionable insights. Then auto-generate Focus Group questions that test whether your target personas would agree with or act on the report's conclusions — turning static research into validated, audience-specific intelligence.

Flow: Research report → Anthropic POST /v1/messages (extract findings) → Mavera POST /personas → Mavera POST /focus-groups (auto-generated questions) → Validated research

Code

import os, json, time, anthropic, requests

MV = os.environ["MAVERA_API_KEY"]
MV_BASE = "https://app.mavera.io/api/v1"
MV_H = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}
client = anthropic.Anthropic()

with open("gartner_martech_2025.txt") as f:
    report_text = f.read()
print(f"Report loaded: {len(report_text):,} chars (~{len(report_text)//4:,} tokens)")

# 1. Claude extracts findings and generates questions
extraction = client.messages.create(
    model="claude-opus-4-6-20250725",
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": "Research analyst. Read this report and:\n\n"
            "A) Extract TOP 8 FINDINGS ranked by how controversial or actionable they are.\n"
            "Per finding: claim (1 sentence), evidence cited, confidence, counterargument.\n\n"
            "B) For each finding, generate 2 FOCUS GROUP QUESTIONS that test practitioner agreement.\n"
            "Questions should be specific, neutral, and grounded in experience.\n\n"
            "Return JSON: [{finding, evidence, confidence, counterargument, questions: [q1, q2]}]\n\n"
            f"REPORT:\n{report_text}"
    }],
)
raw = extraction.content[0].text
print(f"Extraction — {extraction.usage.input_tokens:,} input tokens")

try:
    findings = json.loads(raw[raw.index("["):raw.rindex("]") + 1])
except (ValueError, json.JSONDecodeError):
    findings = [{"finding": raw[:500], "questions": ["Does this match your experience?"]}]
print(f"{len(findings)} findings, {sum(len(f.get('questions',[])) for f in findings)} questions")

# 2. Create target personas
TARGET_PERSONAS = [
    {"name": "VP Marketing, Enterprise SaaS", "desc": "15 years B2B marketing. $5M budget. Team of 20. Skeptical of vendor claims. Data-driven."},
    {"name": "Marketing Ops Manager", "desc": "8 years experience. Manages tech stack day-to-day. Cares about integration and automation."},
    {"name": "CMO, Growth-Stage Startup", "desc": "Series B. $1.5M budget. 6-person team. Needs efficiency over features."},
]
persona_ids = []
for tp in TARGET_PERSONAS:
    r = requests.post(f"{MV_BASE}/personas", headers=MV_H, json={
        "name": tp["name"], "description": tp["desc"],
    })
    r.raise_for_status()  # surface auth/validation errors immediately
    persona_ids.append(r.json()["id"])
    time.sleep(0.3)  # light pacing between create calls

# 3. Collect questions and run Focus Group
all_questions = [q for f in findings[:6] for q in f.get("questions", [])[:2]]
fg = requests.post(f"{MV_BASE}/focus-groups", headers=MV_H, json={
    "name": "Research Validation: Gartner MarTech 2025",
    "persona_ids": persona_ids,
    "questions": all_questions[:10],
    "responses_per_persona": 2,
}).json()

# Poll up to ~150s for completion (scale up for larger focus groups)
for _ in range(30):
    time.sleep(5)
    data = requests.get(f"{MV_BASE}/focus-groups/{fg['id']}", headers=MV_H).json()
    if data.get("status") == "completed":
        break
else:
    print("Warning: focus group did not complete within the polling window")

# 4. Map responses to findings
print(f"\n{'='*60}\nRESEARCH VALIDATION RESULTS\n{'='*60}")
for i, f in enumerate(findings[:6]):
    print(f"\nFinding {i+1}: {f.get('finding','')[:120]}")
    print(f"Confidence: {f.get('confidence','N/A')}")
    for resp in data.get("responses", []):
        if resp.get("question","") in f.get("questions", []):
            idx = persona_ids.index(resp["persona_id"]) if resp.get("persona_id") in persona_ids else -1
            name = TARGET_PERSONAS[idx]["name"] if 0 <= idx < len(TARGET_PERSONAS) else "Unknown"
            print(f"  [{name}]: {resp.get('answer','')[:200]}")

Example Output

Report loaded: 142,830 chars (~35,707 tokens)
8 findings, 16 questions — Focus group: 3 personas

RESEARCH VALIDATION RESULTS
============================================================
Finding 1: "By 2027, 75% of enterprise marketers will consolidate
martech from 10+ tools to 3-5 platforms." (Confidence: High)

  [VP Marketing]: Still at 8 tools after 2 years of consolidating.
  Every "platform" has gaps we fill with point solutions.

  [Marketing Ops]: I'm the one migrating 3 years of Marketo workflows.
  The timeline is aggressive.

  [CMO, Startup]: Started with 3, already at 7. Need speed, not consolidation.

Finding 2: "AI-generated content = 30% of marketing output by 2026."
  [VP Marketing]: 20% for first drafts, 0% published-as-is. Depends
  on what "AI-generated" means — if AI-assisted, we're past it.

Error Handling

A 150-page report is roughly 35K tokens, comfortably within limits. For 500+ page reports, use Claude Opus 4.6 with its 1M-token window; analysis quality improves with full context versus chunked processing.
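If you do need to fall back to chunking for a report that exceeds the window, a simple character-based splitter (using the same ~4 chars/token heuristic as the loader above) is enough to get started. This is a minimal sketch; the helper name and sizes are assumptions, and per-chunk findings would still need to be merged and deduplicated afterwards.

```python
def chunk_report(text: str, max_tokens: int = 150_000, chars_per_token: int = 4) -> list[str]:
    """Split a report into chunks that each fit the given token budget."""
    size = max_tokens * chars_per_token
    return [text[i:i + size] for i in range(0, len(text), size)]

# Each chunk would then go through the same extraction prompt separately.
chunks = chunk_report("x" * 1_000_000, max_tokens=100_000)
print(f"{len(chunks)} chunks, largest {max(len(c) for c in chunks):,} chars")
```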
If generated questions feel generic, add constraints: “Questions must reference specific data points” and “Start with ‘In your experience…’ or ‘At your organization…’”
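One way to enforce those constraints is to append a rules block to the extraction prompt and post-check the generated questions. The wording below is illustrative, not a Mavera or Anthropic requirement:

```python
# Constraint text appended to the extraction prompt (illustrative wording).
QUESTION_CONSTRAINTS = (
    "\nQUESTION RULES:\n"
    "- Reference a specific data point from the finding.\n"
    "- Start with 'In your experience' or 'At your organization'.\n"
    "- Keep phrasing neutral; no leading questions.\n"
)

def is_well_formed(question: str) -> bool:
    """Cheap post-check: drop questions that ignore the required openers."""
    return question.startswith(("In your experience", "At your organization"))
```

Filtering with `is_well_formed` before submitting to the focus group catches the occasional question where the model ignores the rules.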
The response-to-finding mapping relies on exact question string matching. If Mavera modifies question text, fall back to index-based mapping using question order.
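A sketch of that index-based fallback, assuming the focus group preserves the order of the submitted `questions` list and each response carries a positional `question_index` field (a hypothetical field name; adjust to the actual response payload):

```python
def map_by_index(findings: list[dict], responses: list[dict],
                 questions_per_finding: int = 2) -> dict[int, list[dict]]:
    """Group responses under findings by question position, not question text."""
    mapped: dict[int, list[dict]] = {i: [] for i in range(len(findings))}
    for resp in responses:
        qi = resp.get("question_index")
        if qi is not None:
            # Questions were submitted in finding order, 2 per finding,
            # so integer division recovers the finding index.
            mapped[qi // questions_per_finding].append(resp)
    return mapped
```

This stays correct even if Mavera lightly rewords question text, as long as question order is stable.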