

Scenario

You’re about to send a campaign and you have 10 candidate subject lines. Instead of guessing, you pull historical campaign data (subject lines paired with open rates), then run a Focus Group where synthetic personas rate each new subject line on a 1–10 likelihood-to-open scale. The result is a ranked list of subject lines with qualitative explanations before you hit send.

Code

import os, requests, time

MC_KEY = os.environ["MAILCHIMP_API_KEY"]
MC_DC = os.environ["MAILCHIMP_DC"]
MV = os.environ["MAVERA_API_KEY"]
MB = "https://app.mavera.io/api/v1"
MH = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}
MC_BASE = f"https://{MC_DC}.api.mailchimp.com/3.0"
mc_auth = ("anystring", MC_KEY)

r = requests.get(
    f"{MC_BASE}/campaigns",
    auth=mc_auth,
    params={"count": 30, "sort_field": "send_time", "sort_dir": "DESC", "status": "sent"},
)
r.raise_for_status()
campaigns = r.json().get("campaigns", [])

historical = []
for c in campaigns:
    report = c.get("report_summary", {})
    subject = c.get("settings", {}).get("subject_line", "")
    open_rate = report.get("open_rate", 0)
    if subject and open_rate > 0:
        historical.append({"subject": subject, "open_rate": open_rate})

historical.sort(key=lambda x: x["open_rate"], reverse=True)
avg_open = sum(h["open_rate"] for h in historical) / max(len(historical), 1)

history_block = "\n".join(
    f"  {h['open_rate']:.0%} — {h['subject']}"
    for h in historical[:15]
)

NEW_SUBJECTS = os.environ.get("TEST_SUBJECTS", (
    "Your March metrics report is ready|"
    "3 trends your competitors are using right now|"
    "We made a mistake (and fixed it)|"
    "[Name], your account has a new recommendation|"
    "Stop doing this one thing in your campaigns|"
    "The data is in: what worked in Q1|"
    "🔥 New feature drop: AI subject line tester|"
    "Quick question about your workflow|"
    "You're leaving money on the table (proof inside)|"
    "Our CEO's unpopular opinion about email marketing"
)).split("|")

subject_list = "\n".join(f"  {i+1}. {s}" for i, s in enumerate(NEW_SUBJECTS))

PERSONA_IDS = [pid for pid in os.environ.get("PREDICT_PERSONA_IDS", "").split(",") if pid]
if not PERSONA_IDS:
    # No persona IDs supplied — create sample archetypes (see Error Handling below).
    for name, desc in [
        ("Mobile Scanner", "Checks email on iPhone during commute. Reads 3-word subjects only. Opens 10% of marketing emails."),
        ("Desktop Power Reader", "Processes email in dedicated blocks. Reads everything. Opens 40% of marketing emails."),
        ("Selective Opener", "Gets 100+ emails/day. Opens only personalized or urgently relevant. 15% marketing open rate."),
        ("Curious Subscriber", "Relatively new subscriber. Opens most emails to evaluate. 60% open rate but declining."),
    ]:
        resp = requests.post(f"{MB}/personas", headers=MH, json={"name": name, "description": desc})
        resp.raise_for_status()
        PERSONA_IDS.append(resp.json()["id"])
        time.sleep(0.3)

fg = requests.post(f"{MB}/focus-groups", headers=MH, json={
    "name": "Mailchimp: Subject Line Open Rate Prediction",
    "persona_ids": [pid for pid in PERSONA_IDS if pid],
    "questions": [
        f"Rate each subject line on a 1-10 scale for how likely you are to open it (1 = ignore, 10 = open immediately). For each, give the score and a 1-sentence reason:\n{subject_list}",
        "Which 3 subject lines would you DEFINITELY open? Why those three specifically?",
        "Which subject lines would you DEFINITELY ignore or mark as spam? Why?",
        "If you could only open ONE email today and these were all in your inbox, which one wins? What's the deciding factor?",
    ],
    "context": f"""This is a subject line prediction study. Here's the brand's historical performance for context:

Average open rate: {avg_open:.0%}
Recent subject lines (ranked by open rate):
{history_block}

NEW SUBJECT LINES TO TEST:
{subject_list}

Rate honestly based on your email habits. Don't try to be nice — tell us what you'd actually do.""",
    "responses_per_persona": 1,
})
fg.raise_for_status()
fg = fg.json()

for _ in range(30):
    time.sleep(5)
    data = requests.get(f"{MB}/focus-groups/{fg['id']}", headers=MH).json()
    if data.get("status") == "completed":
        break

print(f"Focus Group: {data.get('id')} — {data.get('status')}\n")
for resp in data.get("responses", []):
    print(f"[{resp.get('persona_id','?')}] {resp.get('question','')[:70]}")
    print(f"  → {resp.get('answer','')[:500]}\n")
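The script above prints each persona's raw answers; to get the ranked list the Scenario promises, you still need to pull the numeric scores out of the free-text answers and average them per subject line. A minimal post-processing sketch follows — the `SCORE_RE` pattern and the sample `answers` below are assumptions about the answer format (they match the Example Output style, but actual model formatting can vary, so adjust the regex to what you receive):

```python
import re
from collections import defaultdict

# Matches lines like:  3. "We made a mistake" — 9/10 (curiosity)
# Captures the list index and the N/10 score. Format is an assumption.
SCORE_RE = re.compile(r"^\s*(\d+)\..*?(\d+)/10", re.MULTILINE)

def rank_subjects(subjects, answers):
    """Average each subject line's N/10 scores across all persona answers."""
    scores = defaultdict(list)
    for answer in answers:
        for idx, score in SCORE_RE.findall(answer):
            i = int(idx) - 1  # answers number subjects starting at 1
            if 0 <= i < len(subjects):
                scores[subjects[i]].append(int(score))
    ranked = [(s, sum(v) / len(v)) for s, v in scores.items()]
    ranked.sort(key=lambda x: x[1], reverse=True)
    return ranked

# Illustrative inputs (two subjects, one persona answer):
subjects = ["Your March metrics report is ready", "We made a mistake (and fixed it)"]
answers = ['1. "Your March metrics report" — 3/10 (boring)\n2. "We made a mistake" — 9/10 (rare)']
print(rank_subjects(subjects, answers))
```

Feeding it the `data["responses"]` answers from the main script (instead of the sample strings) yields one averaged score per candidate subject line, highest first.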

Example Output

Focus Group: fg_mc_predict_2k4j — completed

[Mobile Scanner] Rate 1-10 likelihood to open each:
  → 1. "Your March metrics report" — 3/10 (boring, sounds automated)
    2. "3 trends competitors are using" — 8/10 (competitive fear + number)
    3. "We made a mistake" — 9/10 (curiosity, vulnerability is rare)
    4. "[Name], your account has a recommendation" — 7/10 (personal + action)
    5. "Stop doing this one thing" — 6/10 (clickbaity but effective)
    6. "The data is in: what worked in Q1" — 5/10 (meh, no urgency)
    7. "🔥 New feature drop" — 4/10 (emoji = marketing = skip)
    8. "Quick question" — 7/10 (short, conversational, rare)
    9. "Leaving money on the table" — 6/10 (overused phrase)
    10. "CEO's unpopular opinion" — 8/10 (contrarian = interesting)

[Desktop Power Reader] Which 3 DEFINITELY open?
  → "We made a mistake" — vulnerability in marketing is so rare that
    it stands out. I want to know what happened.
    "3 trends competitors" — competitive intelligence is always
    valuable. The number 3 means it's a quick scan.
    "CEO's unpopular opinion" — contrarian content is the only
    marketing worth reading. I want to argue with them.

[Selective Opener] If only ONE, which wins?
  → "We made a mistake (and fixed it)." It's the only one that
    signals genuine honesty. Everything else is marketing — this
    one feels human. That parenthetical "(and fixed it)" tells
    me they're transparent about the problem AND the solution.

Error Handling

The prediction quality depends on having 15+ historical campaigns with diverse subject line styles. If your history is limited, the Focus Group still works but has less context for calibration.
For open rate prediction, use personas that match your actual subscriber demographics. The sample personas (Mobile Scanner, Power Reader, etc.) are archetypes — replace with personas from Job 1 for better accuracy.
Testing 10 subject lines in one question is the practical maximum. For more candidates, split into multiple Focus Groups of 10 each, or use a preliminary round to narrow to 5 finalists.
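The splitting described above can be sketched in a few lines — chunk the candidate list into groups of at most 10 and submit each group as its own Focus Group. The submission loop is shown as comments only, since it just repeats the payload from the main script (assuming the same `MB`/`MH` constants):

```python
def chunk(items, size=10):
    """Split candidate subject lines into groups of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Example: 23 candidates -> three Focus Groups of 10, 10, and 3.
candidates = [f"Subject {n}" for n in range(1, 24)]
batches = chunk(candidates)
print([len(b) for b in batches])  # [10, 10, 3]

# Each batch would then become its own Focus Group, reusing the earlier payload:
# for i, batch in enumerate(batches, 1):
#     subject_list = "\n".join(f"  {j+1}. {s}" for j, s in enumerate(batch))
#     requests.post(f"{MB}/focus-groups", headers=MH, json={...})
```

For a preliminary narrowing round, run each batch, keep the top finalists from the ranked results, and submit those to a final head-to-head group.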

What’s Next

Mailchimp Integration

Back to Mailchimp integration overview

Re-Engagement Builder

Win back dormant subscribers

Focus Groups API

Full reference for POST /api/v1/focus-groups

All Integrations

50+ API integrations with full code