Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Scenario

Product managers write PRDs in Notion — feature descriptions, user stories, success metrics, and launch plans. Before committing engineering resources, you want synthetic validation: How would different user personas rate their interest? What objections would they raise? This job pulls PRD pages, creates user personas, and runs a Focus Group where each persona evaluates the proposed feature. Flow: Notion GET /blocks/{page_id}/children (PRD pages) → Extract requirements → Mavera POST /api/v1/personas → POST /api/v1/focus-groups → Interest ratings per persona

Code

import os, requests, time

NOTION = os.environ["NOTION_API_KEY"]
MV = os.environ["MAVERA_API_KEY"]
NB = "https://api.notion.com/v1"
MB = "https://app.mavera.io/api/v1"
NH = {
    "Authorization": f"Bearer {NOTION}",
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}
MH = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

PRD_DB_ID = "your-prd-database-id"

# 1. Query PRD pages in "In Review" status
prds = requests.post(f"{NB}/databases/{PRD_DB_ID}/query", headers=NH, json={
    "filter": {"property": "Status", "select": {"equals": "In Review"}},
    "page_size": 5,
}).json().get("results", [])

print(f"Found {len(prds)} PRDs in review")

# 2. Extract PRD content
def get_page_text(page_id):
    texts = []
    cursor = None
    while True:
        params = {"page_size": 100}
        if cursor:
            params["start_cursor"] = cursor
        r = requests.get(f"{NB}/blocks/{page_id}/children", headers=NH, params=params)
        if r.status_code == 429:
            # Notion sends a Retry-After header on rate limits
            time.sleep(float(r.headers.get("Retry-After", 1)))
            continue
        data = r.json()
        for block in data.get("results", []):
            btype = block.get("type", "")
            rt = block.get(btype, {}).get("rich_text", [])
            text = "".join(t.get("plain_text", "") for t in rt)
            if text.strip():
                texts.append(text)
        cursor = data.get("next_cursor")
        if not cursor:
            break
        time.sleep(0.4)
    return "\n".join(texts)

# 3. Create user personas for validation
USER_SEGMENTS = [
    {"name": "Enterprise IT Director", "desc": "Manages 50+ person tech team. Evaluates tools for security, scalability, and ROI. Risk-averse. Needs exec-level justification."},
    {"name": "Startup Founder", "desc": "Wears many hats. Needs fast time-to-value. Price-sensitive but willing to pay for 10x improvements. Values simplicity."},
    {"name": "Marketing Manager (Mid-Market)", "desc": "Runs campaigns for a 200-person company. Juggles 5+ tools. Wants consolidation and better reporting. Reports to VP Marketing."},
    {"name": "Developer (IC)", "desc": "Individual contributor building integrations. Cares about API quality, documentation, and developer experience. Skeptical of 'AI magic' claims."},
    {"name": "Product Analyst", "desc": "Data-driven decision maker. Wants quantitative insights, A/B testing support, and export capabilities. Lives in dashboards."},
]

persona_ids = []
for seg in USER_SEGMENTS:
    p = requests.post(f"{MB}/personas", headers=MH, json={
        "name": f"PRD Review: {seg['name']}",
        "description": seg["desc"],
    }).json()
    persona_ids.append({"id": p["id"], "name": seg["name"]})
    time.sleep(0.3)

print(f"Created {len(persona_ids)} personas")

# 4. Run focus group for each PRD
for prd in prds:
    props = prd.get("properties", {})
    title_parts = props.get("Name", props.get("Title", {})).get("title", [])
    title = "".join(t.get("plain_text", "") for t in title_parts)
    prd_text = get_page_text(prd["id"])

    fg = requests.post(f"{MB}/focus-groups", headers=MH, json={
        "name": f"PRD Review: {title}",
        "persona_ids": [p["id"] for p in persona_ids],
        "questions": [
            f"Here is a product requirements document:\n\n{prd_text[:3000]}\n\nOn a scale of 1-10, how interested would you be in this feature? Explain your rating.",
            "What is the single biggest concern or objection you have about this feature?",
            "What would need to be true for you to adopt this on day one?",
            "How would you describe this feature to a colleague in one sentence?",
            "What existing alternative (if any) do you currently use to solve this problem?",
        ],
        "responses_per_persona": 2,
    }).json()

    # 5. Poll for results (up to 30 × 5s = 150 seconds)
    for _ in range(30):
        time.sleep(5)
        data = requests.get(f"{MB}/focus-groups/{fg['id']}", headers=MH).json()
        if data.get("status") == "completed":
            break

    print(f"\n{'='*60}\nPRD: {title}\n{'='*60}")
    for resp in data.get("responses", []):
        persona_name = next((p["name"] for p in persona_ids if p["id"] == resp.get("persona_id")), "Unknown")
        print(f"\n[{persona_name}] Q: {resp.get('question','')[:80]}...")
        print(f"  A: {resp.get('answer','')[:300]}")
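Once responses come back, it can be useful to aggregate the numeric ratings from the first question into a per-persona summary. A minimal sketch (the `average_ratings` helper and its "N/10" regex are assumptions about the answer format, not part of the Mavera API):

```python
import re
from collections import defaultdict

def average_ratings(responses, persona_ids):
    """Average any 'N/10' scores found in answers, grouped by persona name."""
    id_to_name = {p["id"]: p["name"] for p in persona_ids}
    scores = defaultdict(list)
    for resp in responses:
        m = re.search(r"\b(10|[1-9])\s*/\s*10\b", resp.get("answer", ""))
        if m:
            name = id_to_name.get(resp.get("persona_id"), "Unknown")
            scores[name].append(int(m.group(1)))
    return {name: sum(vals) / len(vals) for name, vals in scores.items()}
```

Call it with `data.get("responses", [])` after polling finishes to get a quick read on which segments are most interested.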

Example Output

Found 2 PRDs in review
Created 5 personas

============================================================
PRD: Real-Time Collaboration Dashboard
============================================================

[Enterprise IT Director] Q: On a scale of 1-10, how interested would you be...
  A: 7/10. Real-time collaboration is valuable but I need to understand the
     security model first. Does this support SSO? Can I restrict which dashboards
     are shared externally? Without granular permissions, this is a non-starter
     for regulated industries.

[Startup Founder] Q: On a scale of 1-10, how interested would you be...
  A: 9/10. This is exactly what we need. We're currently screenshotting
     dashboards and pasting them into Slack. Real-time would save our team
     2-3 hours per week in alignment meetings alone.

[Developer (IC)] Q: Biggest concern or objection?
  A: WebSocket-based real-time at scale is hard. What's the latency
     guarantee? What happens when connections drop? I'd want to see
     the API spec before committing any integration work.

[Product Analyst] Q: Current alternative?
  A: We export CSVs to Google Sheets and share those. It's clunky but
     everyone knows Sheets. You'd need to be significantly better than
     "good enough" to justify a switch.

Error Handling

Long PRDs may exceed the Focus Group context limit, so the code truncates the PRD text to 3,000 characters. For detailed PRDs, summarize with Mave first, then pass the summary as context.
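If summarization isn't available, you can at least avoid cutting the PRD mid-sentence. A small sketch (the `truncate_at_boundary` helper is illustrative, not part of the job above): it backs up to the last paragraph break before the limit.

```python
def truncate_at_boundary(text, limit=3000):
    """Truncate to the last newline before `limit` so the excerpt
    doesn't end mid-sentence."""
    if len(text) <= limit:
        return text
    cut = text.rfind("\n", 0, limit)
    if cut == -1:
        cut = limit  # no break found; fall back to a hard cut
    return text[:cut]
```

Swap `prd_text[:3000]` for `truncate_at_boundary(prd_text)` when building the first question.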
5 personas × 5 questions × 2 responses per persona = 50 responses, which may take 2+ minutes to generate. The polling loop waits up to 150 seconds; increase it for larger configurations.
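For larger runs, a timeout-aware poller with backoff is more robust than a fixed 30-iteration loop. A sketch, assuming the same GET /focus-groups/{id} status field used above (the `wait_for` helper is illustrative; the injectable `fetch`, `sleep`, and `clock` parameters just make it testable):

```python
import time

def wait_for(fetch, timeout=600, interval=5, max_interval=30,
             sleep=time.sleep, clock=time.time):
    """Poll `fetch()` until it reports a terminal status, backing off between checks."""
    deadline = clock() + timeout
    while clock() < deadline:
        data = fetch()
        if data.get("status") in ("completed", "failed"):
            return data
        sleep(interval)
        interval = min(interval * 1.5, max_interval)
    raise TimeoutError(f"focus group did not finish within {timeout}s")
```

In the job above, step 5 would become `data = wait_for(lambda: requests.get(f"{MB}/focus-groups/{fg['id']}", headers=MH).json(), timeout=600)`.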
PRD databases vary widely in schema. The code assumes a Status select property and a Name or Title title property. Inspect your database with GET /databases/{id} to verify its schema.
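To avoid hard-coding the title property name, you can inspect the schema once at startup. A sketch against the Notion GET /databases/{id} response shape (the `find_title_property` helper is an assumption, not part of the job above):

```python
def find_title_property(db_schema):
    """Return the name of the title-type property in a Notion database schema."""
    for name, prop in db_schema.get("properties", {}).items():
        if prop.get("type") == "title":
            return name
    return None
```

In the job above: `schema = requests.get(f"{NB}/databases/{PRD_DB_ID}", headers=NH).json()`, then use `find_title_property(schema)` in place of the Name/Title guess in step 4.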