
Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Scenario

Your SurveyMonkey survey collected 1,200 responses across 20 questions: multiple choice, ratings, and open-ended. Manually analyzing this would take days. This job pulls all responses via the bulk endpoint, structures them for analysis, then sends the entire dataset to Mave Agent with the instruction: “Analyze responses. Identify statistical patterns, sentiment trends, and actionable recommendations.” The result is an AI-generated research report that would normally require a dedicated analyst.

Architecture

Flow: SurveyMonkey GET /v3/surveys/{id}/responses/bulk → Aggregate and structure → Mave POST /api/v1/mave/chat: “Analyze responses. Identify statistical patterns, sentiment trends, recommendations.” → Research report

Code

import os, json, requests, time
from collections import Counter, defaultdict

SM = os.environ["SURVEYMONKEY_TOKEN"]
MV = os.environ["MAVERA_API_KEY"]
SM_BASE = "https://api.surveymonkey.com/v3"
MB = "https://app.mavera.io/api/v1"
SM_H = {"Authorization": f"Bearer {SM}", "Content-Type": "application/json"}
MV_H = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

SURVEY_ID = os.environ.get("SURVEY_ID", "your_survey_id")

# 1. Get survey structure
survey = requests.get(f"{SM_BASE}/surveys/{SURVEY_ID}/details",
    headers=SM_H).json()
print(f"Survey: {survey.get('title', 'Untitled')} ({survey.get('response_count', 0)} responses)")

# Build question/answer lookup
questions = {}
choice_labels = {}
for page in survey.get("pages", []):
    for q in page.get("questions", []):
        qid = q["id"]
        questions[qid] = {
            "title": q.get("headings", [{}])[0].get("heading", ""),
            "type": q.get("family", ""),
            "subtype": q.get("subtype", ""),
        }
        for row in q.get("answers", {}).get("rows", []):
            choice_labels[row["id"]] = row.get("text", "")
        for choice in q.get("answers", {}).get("choices", []):
            choice_labels[choice["id"]] = choice.get("text", "")
        for col in q.get("answers", {}).get("columns", []):
            choice_labels[col["id"]] = col.get("text", col.get("label", ""))

# 2. Pull all responses (paginated)
all_responses = []
page_num = 1
while True:
    r = requests.get(f"{SM_BASE}/surveys/{SURVEY_ID}/responses/bulk",
        headers=SM_H, params={"page": page_num, "per_page": 100})
    if r.status_code == 429:
        # Back off before retrying; cap the wait so a long daily-reset window doesn't stall the run
        retry = int(r.headers.get("X-Ratelimit-App-Global-Day-Reset", 60))
        wait = min(retry, 60)
        print(f"Rate limited. Retrying in {wait}s")
        time.sleep(wait)
        continue
    r.raise_for_status()
    data = r.json()
    batch = data.get("data", [])
    all_responses.extend(batch)

    total = data.get("total", 0)
    if not batch or (total and len(all_responses) >= total):
        break
    page_num += 1
    time.sleep(0.6)

print(f"Fetched {len(all_responses)} responses")

# 3. Aggregate answers by question
qa_data = defaultdict(list)
for resp in all_responses:
    for page_data in resp.get("pages", []):
        for q_data in page_data.get("questions", []):
            qid = q_data["id"]
            for ans in q_data.get("answers", []):
                if "text" in ans:
                    qa_data[qid].append(ans["text"])
                elif "choice_id" in ans:
                    label = choice_labels.get(ans["choice_id"], ans["choice_id"])
                    qa_data[qid].append(label)
                    if "row_id" in ans:
                        row = choice_labels.get(ans["row_id"], "")
                        qa_data[qid][-1] = f"{row}: {label}"

# 4. Build analysis summary
summary_parts = []
for qid, answers in qa_data.items():
    q_info = questions.get(qid, {})
    title = q_info.get("title", qid)
    q_type = q_info.get("type", "")

    if q_type in ("single_choice", "multiple_choice", "matrix"):
        counts = Counter(answers).most_common(10)
        dist = ", ".join(f"{label}: {count} ({count/len(answers)*100:.0f}%)"
                        for label, count in counts)
        summary_parts.append(f"**{title}** (n={len(answers)})\n  {dist}")
    elif q_type == "open_ended":
        sample = "; ".join(answers[:15])[:500]
        summary_parts.append(f"**{title}** (n={len(answers)}, open-ended)\n  Samples: {sample}")
    else:
        # Treat anything parseable as a number (e.g. ratings) numerically; otherwise just report the count
        nums = []
        for a in answers:
            try:
                nums.append(float(a))
            except ValueError:
                pass
        if nums:
            avg = sum(nums) / len(nums)
            summary_parts.append(f"**{title}** (n={len(nums)}, avg={avg:.1f})")
        else:
            summary_parts.append(f"**{title}** (n={len(answers)})")

summary = "\n\n".join(summary_parts)

# 5. Mave analysis
analysis = requests.post(f"{MB}/mave/chat", headers=MV_H, json={
    "message": f"""Analyze {len(all_responses)} survey responses from "{survey.get('title', 'Survey')}".

QUESTION-BY-QUESTION DATA:
{summary}

Tasks:
1) Statistical patterns: Correlations between questions, unexpected distributions
2) Sentiment trends: Overall and per-question sentiment analysis
3) Key findings: Top 5 insights ranked by impact
4) Segment differences: Any visible subgroups in the data
5) Red flags: Responses that indicate problems or risks
6) Recommendations: 5 actionable next steps based on the data
7) Suggested follow-up questions: What should you ask next?"""
}).json()

print(f"\n{'='*60}")
print(f"ANALYSIS: {survey.get('title', 'Survey')} ({len(all_responses)} responses)")
print(f"{'='*60}")
print(analysis.get("content", "")[:3000])
print(f"\nSources: {len(analysis.get('sources', []))}")

Example Output

============================================================
ANALYSIS: Q1 Customer Satisfaction Survey (1,247 responses)
============================================================

## Key Findings

### 1. Bimodal Satisfaction (Critical)
Overall satisfaction shows two peaks: 8-9/10 (43%) and 3-4/10 (22%).
The middle is thin — customers either love you or are frustrated.
No "passive middle" suggests polarizing experiences.

### 2. Onboarding is the Breakpoint
92% of dissatisfied respondents (≤5/10) cite onboarding as their
primary pain. Satisfaction jumps from 4.1 → 8.3 after 30-day mark.
Intervention window: days 3-14.

### 3. Feature Satisfaction ≠ Retention Intent
Reporting module scores 8.2/10 in satisfaction but is cited by 38%
of "likely to churn" respondents as "not enough." High expectations,
not low quality, drives churn risk.

### 4. Support Channel Preference Shift
Under-35 respondents (34% of base) prefer chat (67%) over email (12%).
Over-45 prefer email (58%) over chat (21%). Your support channel
allocation doesn't match.

### 5. Open-Ended Red Flag
"We're evaluating alternatives" appears in 14% of detractor comments.
Cross-reference with CRM to identify at-risk accounts.

## Recommendations
1. Launch 14-day onboarding drip with check-in at day 3, 7, 14
2. Shift 30% of support capacity from email to chat
3. Create "power user" track for reporting module
4. Run churn risk analysis on detractor accounts
5. Deploy NPS follow-up survey targeting the "3-4" cohort

Sources: 0

Error Handling

Private apps are capped at 500 requests per day. The bulk endpoint returns 100 responses per page, so a 2,000-response survey needs 20 calls. Cache responses locally after the first pull.
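
A minimal caching sketch, assuming the script can write to its working directory and reusing SURVEY_ID from the code above; the cache file name and the fetch_all_responses wrapper are hypothetical stand-ins for the step-2 pagination loop:

import os, json

CACHE_FILE = f"responses_{SURVEY_ID}.json"  # hypothetical local cache path

if os.path.exists(CACHE_FILE):
    # Reuse an earlier pull so reruns don't spend the daily request quota
    with open(CACHE_FILE) as f:
        all_responses = json.load(f)
else:
    all_responses = fetch_all_responses()  # hypothetical wrapper around the step-2 pagination loop
    with open(CACHE_FILE, "w") as f:
        json.dump(all_responses, f)
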
Matrix questions nest rows and columns. The code concatenates “Row: Column” labels. For complex matrices (10×10), the summary may be verbose — truncate to top 5 combinations.
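
One way to keep that short is to fold everything past the top few combinations into an “other” bucket. A sketch (the cutoff of 5 is arbitrary) that could replace the Counter(...).most_common(10) line in step 4 for matrix questions:

from collections import Counter

def top_combinations(answers, limit=5):
    # Keep the most frequent "Row: Column" combinations, fold the rest into one bucket
    counts = Counter(answers).most_common()
    parts = [f"{label}: {count}" for label, count in counts[:limit]]
    rest = sum(count for _, count in counts[limit:])
    if rest:
        parts.append(f"other combinations: {rest}")
    return ", ".join(parts)

# Usage in step 4: dist = top_combinations(qa_data[qid])
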
If choice_labels has no match, the raw choice ID is used instead. This typically happens with custom “Other” options, which may not appear in the rows, choices, or columns lists that step 1 reads from the survey details endpoint. Extend that lookup to cover them, as in the sketch below.
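
A sketch of one way to extend the lookup, assuming the details payload exposes a custom “Other” option under answers.other; the exact field layout may differ by question type:

def register_other_option(q, choice_labels):
    # Hypothetical: add the question's custom "Other" option to the label lookup
    other = q.get("answers", {}).get("other")
    if isinstance(other, dict) and "id" in other:
        choice_labels[other["id"]] = other.get("text", "Other")

# Usage: call register_other_option(q, choice_labels) for each question in the step-1 loop
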
Surveys with 20+ questions and 1,000+ responses generate large summaries. The code caps open-ended samples at 15 and text at 500 chars. For very large surveys, analyze in sections.
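
A section-by-section sketch, reusing the requests import, MB, MV_H, and summary_parts from the script above; the chunk size and prompt wording are placeholders:

CHUNK = 8  # questions per Mave call (arbitrary)

section_reports = []
for i in range(0, len(summary_parts), CHUNK):
    chunk = "\n\n".join(summary_parts[i:i + CHUNK])
    r = requests.post(f"{MB}/mave/chat", headers=MV_H, json={
        "message": f"Analyze this subset of survey questions. List the key findings.\n\n{chunk}"
    })
    section_reports.append(r.json().get("content", ""))

# Second pass: merge the per-section findings into one report
merged = requests.post(f"{MB}/mave/chat", headers=MV_H, json={
    "message": "Combine these section analyses into a single report with top findings "
               "and recommendations:\n\n" + "\n\n---\n\n".join(section_reports)
}).json()
print(merged.get("content", ""))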