

Scenario

When a topic trends on X, there is a 4-8 hour window in which content earns disproportionate reach. This job pulls trending topics from the X API, filters them for brand relevance via Mave, then generates a multi-format content sprint.

Architecture

X Trends API → Mave relevance scoring → Mavera content generation. Each stage is a single HTTP call, so the whole job fits in one script.

Code

import os, requests, time

X = os.environ["X_BEARER_TOKEN"]; MV = os.environ["MAVERA_API_KEY"]
X_BASE = "https://api.x.com/2"; MV_BASE = "https://app.mavera.io/api/v1"
X_H = {"Authorization": f"Bearer {X}"}
MV_H = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

WOEID = 23424977  # United States
BRAND = "B2B marketing analytics. Audience: marketing leaders. Tone: insightful, data-driven, slightly witty. Own: attribution, analytics, MarTech. Avoid: politics, crypto."

# 1. Fetch trends
r = requests.get(f"{X_BASE}/trends/by/woeid/{WOEID}", headers=X_H)
if r.status_code == 429:
    # Sleep until the rate limit resets; clamp so a stale header can't produce a negative sleep
    reset = int(r.headers.get("x-rate-limit-reset", int(time.time()) + 60))
    time.sleep(max(0, reset - int(time.time())))
    r = requests.get(f"{X_BASE}/trends/by/woeid/{WOEID}", headers=X_H)
if r.status_code == 403:
    print("Trends requires elevated access. Using fallback.")
    trends = [{"name": "marketing analytics", "tweet_volume": 0}]
else:
    r.raise_for_status()
    td = r.json()
    trends = [{"name": t.get("name",""), "tweet_volume": t.get("tweet_volume",0)}
              for t in td.get("data", td.get("trends",[]))[:30]]

# 2. Filter via Mave
trend_list = "\n".join(f"- {t['name']} (vol: {t['tweet_volume'] or 'N/A'})" for t in trends[:25])
relevance_r = requests.post(f"{MV_BASE}/mave/chat", headers=MV_H, json={
    "message": f"Content strategist: review X trends for brand relevance.\n\nTRENDS:\n{trend_list}\n\nBRAND: {BRAND}\n\nScore each 1-10. Return only 7+ with angle, format, urgency."
})
relevance_r.raise_for_status()
relevance = relevance_r.json()

# 3. Content sprint
sprint_r = requests.post(f"{MV_BASE}/generations", headers=MV_H, json={
    "prompt": f"Content sprint from trend analysis.\n\n{relevance.get('content','')[:2000]}\n\n"
        "For each relevant trend:\n1. Tweet thread (5-7 tweets, <280 chars, hook first)\n"
        "2. LinkedIn post (150-200 words)\n3. Blog outline (title + 5 sections)\n\nTie to marketing analytics."
})
sprint_r.raise_for_status()
sprint = sprint_r.json()
print(sprint.get("output", sprint.get("content",""))[:2000])
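Because this job is meant to run repeatedly during a trend's 4-8 hour window, back-to-back runs can regenerate content for the same topic. A minimal dedupe sketch for filtering out recently handled trends between runs (the `seen_trends.json` state file and the 24-hour TTL are illustrative choices, not part of the Mavera API):

```python
import json, os, time

STATE_FILE = "seen_trends.json"   # hypothetical local state file
TTL = 24 * 3600                   # re-allow a trend only after 24 hours

def load_seen(path=STATE_FILE):
    """Load the {trend_name: last_seen_timestamp} map, if any."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

def filter_new(trends, seen, now=None):
    """Return trends not seen within TTL, plus the updated seen map."""
    now = now or time.time()
    # Drop expired entries, then keep only trends absent from the fresh map
    fresh = {k: v for k, v in seen.items() if now - v < TTL}
    new = [t for t in trends if t["name"] not in fresh]
    for t in new:
        fresh[t["name"]] = now
    return new, fresh
```

Call `filter_new(trends, load_seen())` before step 2 and write the updated map back to disk after the sprint completes.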

Example Output

### #MarTechCollapse (9/10, ~6 hours to peak)

Tweet Thread:
1/ MarTech lost 3 players in 2 weeks. The real story isn't who died —
   it's what killed them. 🧵
2/ Avg team uses 12 tools (was 8). More ≠ better. More silos, headaches.
3/ Pattern: they solved a feature, not a workflow. Point solutions die.
4/ Audit: "If it disappeared, would we notice in a week?" If no → shelf-ware.
5/ Winners own the analytics layer — can't fake attribution with Zapier.
6/ We built [Product] for this moment. → [link]

LinkedIn: Three MarTech companies shut down in two weeks. The underlying
trend matters more than the headlines...

Timing and Review

X trends peak within 4-8 hours, so the sprint must execute end-to-end in under 30 minutes; pre-warm Mavera by caching the brand context rather than rebuilding it on each run.
Some trending topics carry hidden context (inside jokes, breaking controversies), so add human review for topics scoring 7-8 before publishing.
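The review gate above can be sketched as a simple triage step over Mave's output. This assumes Mave keeps the `### #Topic (9/10, ...)` header format shown in the example output; the regex and the 7-8 review band are assumptions, not a documented contract:

```python
import re

# Matches headers like "### #MarTechCollapse (9/10, ~6 hours to peak)"
HEADER = re.compile(r"^### (?P<topic>.+?) \((?P<score>\d+)/10", re.M)

def triage(mave_output, review_band=(7, 8)):
    """Split scored trends into auto-publish and human-review queues."""
    auto, review = [], []
    for m in HEADER.finditer(mave_output):
        topic, score = m.group("topic"), int(m.group("score"))
        if review_band[0] <= score <= review_band[1]:
            review.append((topic, score))
        else:
            auto.append((topic, score))
    return auto, review
```

Anything in the `review` queue goes to a human before publishing; a stricter setup would route everything through review.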
