Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Scenario

The best ad copy addresses real pain. Reddit is full of “frustrated with” threads where users describe problems in their own words. This job searches problem threads across several subreddits, extracts and ranks the top 10 pain points via Mave, then generates targeted ad copy for each.

Architecture
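The pipeline has three stages, each building the payload the next API call consumes. A minimal data-flow sketch (these helper names are illustrative, not part of the Mave or Reddit APIs):

```python
def build_search_params(query: str) -> dict:
    """Stage 1: Reddit search parameters for one query/subreddit pair."""
    return {"q": query, "restrict_sr": "true", "sort": "relevance",
            "t": "year", "limit": 5, "raw_json": 1}

def build_extraction_prompt(category: str, enriched: list[str]) -> str:
    """Stage 2: pain-point extraction prompt from collected threads."""
    return (f"Analyze these Reddit complaints about {category}. "
            "Extract top 10 pain points.\n\n" + "\n".join(enriched))

def build_ad_prompt(category: str, pain_text: str) -> str:
    """Stage 3: ad-copy generation prompt from extracted pain points."""
    return (f"From these Reddit pain points about {category}, "
            f"write targeted ad copy.\n\n{pain_text[:2000]}")
```

Each stage depends only on the previous stage's text output, so the stages can be run, cached, and retried independently.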

Code

import os, requests, time

# --- Auth setup same as Job 1: assumes RD (Reddit API base URL), RD_H (Reddit
# OAuth headers), MV_BASE (Mave API base URL), and MV_H (Mave headers) are defined ---

CATEGORY = "project management software"
SUBS = ["projectmanagement", "productivity", "startups", "smallbusiness"]
QUERIES = [f"frustrated with {CATEGORY}", f"hate {CATEGORY}", f"problem with {CATEGORY}",
           f"switching from {CATEGORY}", f"alternative to {CATEGORY}"]

# 1. Collect problem threads
threads = []
for q in QUERIES:
    for sub in SUBS:
        r = requests.get(f"{RD}/r/{sub}/search", headers=RD_H,
            params={"q": q, "restrict_sr": "true", "sort": "relevance", "t": "year", "limit": 5, "raw_json": 1})
        if r.status_code == 429:
            # Reddit may return the reset as a fractional value; float() handles both
            time.sleep(float(r.headers.get("X-Ratelimit-Reset", 60)))
            continue
        if not r.ok:
            continue
        for p in r.json().get("data",{}).get("children",[]):
            d = p["data"]
            if d.get("selftext") or d.get("title"):
                threads.append({"title": d["title"], "text": d.get("selftext","")[:400],
                    "subreddit": d.get("subreddit",""), "score": d.get("score",0)})
        time.sleep(0.7)

enriched = [f"[/r/{t['subreddit']}] {t['title']}\n{t['text'][:300]}"
            for t in sorted(threads, key=lambda x: -x["score"])[:15]]
print(f"Collected {len(threads)} problem threads across {len(SUBS)} subreddits")

# 2. Extract pain points
extraction = requests.post(f"{MV_BASE}/mave/chat", headers=MV_H, json={
    "message": f"Analyze these Reddit complaints about {CATEGORY}. Extract top 10 pain points.\n\n"
        + "\n".join(enriched)
        + "\n\nFor each: name (3-5 words), frequency, emotional intensity (1-10), representative quote, underlying need. Rank by frequency × intensity."
}).json()
pain_text = extraction.get("content", "")
print(pain_text[:1200])

# 3. Generate ad copy per pain point
ads = requests.post(f"{MV_BASE}/generations", headers=MV_H, json={
    "prompt": f"From these Reddit pain points about {CATEGORY}, write targeted ad copy.\n\n{pain_text[:2000]}\n\n"
        "For each pain point:\n- Facebook: Headline (40 chars) + Primary text (125 chars) + Description (30 chars)\n"
        "- Google: H1 (30 chars) + H2 (30 chars) + Description (90 chars)\n"
        "- LinkedIn: Intro (150 chars) + Headline (70 chars)\n\nUse EXACT Reddit vocabulary. Address pain first.",
}).json()
print(f"\n{'='*60}\nAD COPY LIBRARY\n{'='*60}")
print(ads.get("output", ads.get("content", ""))[:1500])

Example Output

Collected 73 problem threads across 4 subreddits

1. Too Many Features (freq: 18, intensity: 8/10)
   "I just want to assign a task. Why do I need sprints and story points?"
2. Pricing Punishes Growth (freq: 14, intensity: 9/10)
   "We went from $0 to $2,400/month overnight."

AD COPY — Pain 1: Too Many Features
  Facebook: "Just Assign Tasks. No Epics Required." / Simple PM.
  Google: "Simple Task Management" / "No Sprints. No Epics."
  LinkedIn: Your PM tool shouldn't require a cert. / PM Without Overhead

AD COPY — Pain 2: Pricing Punishes Growth
  Facebook: "$0 to $2,400/mo Overnight? Not Here." / Fair pricing.
  Google: "Flat Pricing, Always" / "No Surprise Invoices"

Error Handling

5 queries × 4 subreddits = 20 API calls. With 700ms delays, ~14 seconds. For faster execution, parallelize per query.
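One way to parallelize per query, sketched with Python's standard library (`fetch_threads` is a hypothetical wrapper; in the real job it would run the `requests.get` search loop over SUBS with the 700ms delay, here it returns placeholder records for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

SUBS = ["projectmanagement", "productivity", "startups", "smallbusiness"]
QUERIES = ["frustrated with project management software",
           "hate project management software"]

def fetch_threads(query):
    """Hypothetical wrapper: runs one query across all subreddits
    sequentially, preserving the per-request delay within the worker."""
    return [{"query": query, "subreddit": sub} for sub in SUBS]

# One worker per query: different queries run concurrently, while each
# worker keeps its own 700ms spacing for rate-limit safety.
with ThreadPoolExecutor(max_workers=len(QUERIES)) as pool:
    threads = [t for batch in pool.map(fetch_threads, QUERIES) for t in batch]
```

With 5 workers, wall time drops from ~14 seconds to roughly the duration of the slowest single query (~3 seconds), though concurrent requests share one OAuth rate limit, so keep the per-worker delays.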
Mave extraction works best with 15+ threads. For niche categories, broaden search or include adjacent categories.
Platform limits are enforced at upload, not generation. Always validate lengths before importing into ad platforms.
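Length checks can be done with a small helper before export. A sketch, using the limits requested in the generation prompt above (verify against each platform's current ad specs, which change over time):

```python
# Character limits as requested in the generation prompt; real platform
# limits may differ, so confirm against current ad specs before relying on these.
LIMITS = {
    ("facebook", "headline"): 40,
    ("facebook", "primary_text"): 125,
    ("facebook", "description"): 30,
    ("google", "h1"): 30,
    ("google", "h2"): 30,
    ("google", "description"): 90,
    ("linkedin", "intro"): 150,
    ("linkedin", "headline"): 70,
}

def validate_ad(platform, field, text):
    """Return None if text fits the limit, else a human-readable violation."""
    limit = LIMITS[(platform, field)]
    if len(text) <= limit:
        return None
    return f"{platform}/{field}: {len(text)} chars (limit {limit})"

violations = [v for v in (
    validate_ad("facebook", "headline", "Just Assign Tasks. No Epics Required."),
    validate_ad("google", "h1", "Simple Task Management"),
) if v]
```

Running this over every generated variant and rejecting (or regenerating) violations is cheaper than discovering truncated copy after upload.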
