Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Scenario

Before deploying a new survey, you want to know: Are the questions clear? Are they biased? Will respondents drop off? This job pulls your existing survey structure from SurveyMonkey, creates personas that represent your target respondents, then runs a Focus Group where personas “take” the survey synthetically. They flag confusing questions, suggest rephrasing, and identify where respondent fatigue is likely. The result is an optimized survey before a single real respondent sees it.

Flow: SurveyMonkey GET /v3/surveys/{id}/details → Extract question structure → Mavera POST /api/v1/focus-groups with personas “taking” the survey → “Was this confusing? How would you rephrase?” → Optimized question list

Architecture

Code

import os, requests, time

SM = os.environ["SURVEYMONKEY_TOKEN"]
MV = os.environ["MAVERA_API_KEY"]
SM_BASE = "https://api.surveymonkey.com/v3"
MB = "https://app.mavera.io/api/v1"
SM_H = {"Authorization": f"Bearer {SM}"}
MV_H = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

SURVEY_ID = os.environ.get("SURVEY_ID", "your_survey_id")

# 1. Pull survey structure
resp = requests.get(f"{SM_BASE}/surveys/{SURVEY_ID}/details", headers=SM_H)
resp.raise_for_status()  # fail fast on a bad token or survey ID
survey = resp.json()
print(f"Survey: {survey.get('title', 'Untitled')}")

# 2. Extract questions in human-readable format
survey_text = []
q_num = 0
for page in survey.get("pages", []):
    page_title = page.get("title", "")
    if page_title:
        survey_text.append(f"\n--- Page: {page_title} ---")

    for q in page.get("questions", []):
        q_num += 1
        heading = q.get("headings", [{}])[0].get("heading", "Untitled")
        q_type = q.get("family", "unknown")
        subtype = q.get("subtype", "")
        # "required" can be null in the API payload, so guard before .get()
        required = "Required" if (q.get("required") or {}).get("text") else "Optional"

        # "answers" can be null; matrix questions carry rows (and, for
        # menu subtypes, cols) alongside choices
        answers = q.get("answers") or {}
        choices = [c.get("text", "") for c in answers.get("choices", [])]
        for r in answers.get("rows", []):
            choices.append(f"Row: {r.get('text', '')}")
        for col in answers.get("cols", []):
            choices.append(f"Col: {col.get('text', '')}")

        q_text = f"Q{q_num}. [{q_type}/{subtype}] [{required}] {heading}"
        if choices:
            q_text += f"\n     Options: {', '.join(choices[:10])}"
        survey_text.append(q_text)

full_survey = "\n".join(survey_text)
print(f"Questions: {q_num}")

# 3. Create respondent personas
RESPONDENT_PROFILES = [
    {"name": "SM: Busy Executive",
     "desc": "C-level, reads fast, low patience for long surveys. Will abandon if confused."},
    {"name": "SM: Detail-Oriented Analyst",
     "desc": "Data person. Wants precise questions. Will overthink ambiguous wording."},
    {"name": "SM: Non-Native English Speaker",
     "desc": "International respondent. Struggles with idioms, double negatives, jargon."},
]

persona_ids = []
for profile in RESPONDENT_PROFILES:
    p = requests.post(f"{MB}/personas", headers=MV_H, json={
        "name": profile["name"],
        "description": profile["desc"],
    }).json()
    persona_ids.append(p["id"])
    time.sleep(0.3)

# 4. Focus Group: personas "take" the survey
fg = requests.post(f"{MB}/focus-groups", headers=MV_H, json={
    "name": f"Survey Review: {survey.get('title', 'Survey')}",
    "persona_ids": persona_ids,
    "context": f"""You are about to take this survey. Read each question carefully.

FULL SURVEY:
{full_survey}""",
    "questions": [
        "Go through each question. Which ones confused you? For each confusing question, explain what's unclear and suggest a rephrasing.",
        "Are any questions leading or biased? Which ones and why?",
        "At what point in the survey would you consider abandoning it? Why?",
        "Which questions would you skip if they were optional? Why?",
        "Rate the overall survey experience (1-10) and explain your rating.",
        "If you could remove 3 questions without losing important data, which 3?",
        "Rewrite the single worst question in this survey.",
    ],
    "responses_per_persona": 3,
}).json()

# 5. Poll for results (up to ~100s; increase the range for large groups)
result = {}
for _ in range(20):
    time.sleep(5)
    result = requests.get(f"{MB}/focus-groups/{fg['id']}",
        headers=MV_H).json()
    if result.get("status") == "completed":
        break

print(f"\nFocus Group: {fg['id']}")
for resp in result.get("responses", []):
    print(f"\n[{resp.get('persona_name', '?')}] {resp.get('question', '')[:60]}...")
    print(f"  → {resp.get('answer', '')[:300]}")

Example Output

Survey: Product Feedback Q1 2026
Questions: 18
Focus Group: fg_sm_review_9k3n

[SM: Busy Executive] Which questions confused you?
  → Q7 asks "How satisfied are you with the platform's ability to
    integrate with your existing workflows?" — too vague. Which
    workflows? I have 12. Rephrase: "Does [Product] integrate with
    the tools you use daily? (Yes/No/Partially) If partially, which
    tools are missing?"

  → Q14 uses a 7-point scale where 4 is "Neither agree nor disagree."
    I never know what that means. Use 5-point or binary.

[SM: Non-Native English Speaker] Are any questions biased?
  → Q3: "Don't you agree that our onboarding was smooth?" The
    double negative plus "smooth" is leading. I'd say yes even if
    I disagreed. Rephrase: "How would you describe your onboarding
    experience?" with options from "Very difficult" to "Very easy."

[SM: Detail-Oriented Analyst] At what point would you abandon?
  → Page 3 (Q12-Q15). Four matrix questions in a row. Each has 8
    rows × 5 columns. That's 160 cells to fill. I'd close the tab.
    Reduce to the 3 most important rows per matrix.

[SM: Busy Executive] Rate overall experience (1-10).
  → 5/10. Too long (18 questions when 10 would suffice), too many
    matrix questions, and the open-ended fields don't have character
    limits so I don't know how much to write.

Error Handling

SurveyMonkey nests questions in pages. Matrix questions have rows, columns, and choices. The code extracts all levels but limits choices to 10 per question to keep the focus group prompt manageable.
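To spot fatigue hotspots before running the focus group, a small helper can count matrix cells per question, the way the Detail-Oriented Analyst does in the example output. This is a sketch only; it assumes the /details payload shape used above (pages → questions, matrix answers under "rows" with "cols" or "choices" as the columns):

```python
def matrix_cell_counts(survey):
    """Return (question_number, cell_count) for each matrix question.

    Assumes the /details payload shape used above; field names like
    "family", "rows", "cols", and "choices" are taken from that payload.
    """
    counts, q_num = [], 0
    for page in survey.get("pages", []):
        for q in page.get("questions", []):
            q_num += 1
            if q.get("family") != "matrix":
                continue
            answers = q.get("answers") or {}
            rows = len(answers.get("rows", []))
            # Menu subtypes use "cols"; rating matrices use "choices"
            cols = len(answers.get("cols", []) or answers.get("choices", []))
            counts.append((q_num, rows * cols))
    return counts
```

Anything over roughly 30-40 cells per question is a likely abandonment point and a candidate for trimming before the focus group even runs.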
The /details endpoint is a single call. Cache the survey structure locally — it rarely changes. This preserves daily quota for response-heavy jobs.
Surveys with 30+ questions produce very long context strings. Truncate to the most important 15-20 questions, or split into multiple focus groups by survey page.
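A rough way to do the page split, assuming the "--- Page: ... ---" marker lines emitted in step 2 of the code above:

```python
def split_by_page(survey_text_entries, pages_per_group=2):
    """Group the extracted survey entries into chunks of N pages.

    Relies on the "--- Page: ... ---" markers emitted in step 2;
    entries before the first marker stay in the first chunk. Each
    returned chunk can seed its own focus group.
    """
    chunks, current, page_count = [], [], 0
    for entry in survey_text_entries:
        if entry.strip().startswith("--- Page:"):
            page_count += 1
            # Close out the current chunk every `pages_per_group` pages
            if page_count > 1 and (page_count - 1) % pages_per_group == 0:
                chunks.append("\n".join(current))
                current = []
        current.append(entry)
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Pass `survey_text` (the list, before joining) and create one focus group per chunk; personas can be reused across chunks since they are created independently.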