Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Mavera Surfaces Used

| Surface | Role |
| --- | --- |
| News Intelligence (GET /news, POST /news/search) | Monitor industry news feeds and detect significant stories |
| Mave Agent (POST /mave/chat) | Deep research triggered by breaking news — market impact, opportunities, threats |
| Mave Threads (POST /mave/chat with thread_id) | Multi-turn follow-up to drill into specific implications |
| Chat + response_format | Structure research into a standardized strategic intelligence brief |

This playbook creates a monitoring loop: News API detects relevant stories, significance is scored, and high-impact events automatically trigger Mave Agent research. The output is real-time strategic intelligence — not just news alerts, but analyzed implications for your market position.

What Value Does Mavera Add?

| Value | How |
| --- | --- |
| Insurance | Never be blindsided by market shifts. Automated monitoring catches stories your team would miss. |
| Opening new doors | Turn breaking news into strategic advantage. While competitors react, you’ve already analyzed implications. |
| Saving time | Replaces manual news monitoring + analyst interpretation. A story breaks → you have an analysis in minutes. |

When to Use This

  • You operate in a fast-moving market where competitor moves, regulatory changes, or funding events impact your strategy.
  • You want automated intelligence that goes beyond alerts — you need analyzed implications, not just headlines.
  • You’re preparing for board meetings and need a current-state market briefing on demand.
  • You want to build a strategic intelligence archive that grows over time.

What You Need

| Requirement | Details |
| --- | --- |
| Mavera API key | Starts with mvra_live_. Get one at Developer Settings. |
| Workspace ID | From your dashboard URL (ws_...). |
| Industry keywords | Search terms that match your market (e.g. “AI market research”, “synthetic audiences”, “persona validation”). |
| Significance threshold | Minimum score (1-10) to trigger deep research. Default: 7. |
| Credits | ~50–200 per triggered research. Monitoring costs vary. See Credits Estimate. |
| Python 3.9+ or Node.js 18+ | requests / openai for Python (the examples use built-in generic annotations like list[dict], which require 3.9); native fetch for Node. |
MAVERA_API_KEY=mvra_live_your_key_here
MAVERA_WORKSPACE_ID=ws_your_workspace_id
SIGNIFICANCE_THRESHOLD=7
NEWS_KEYWORDS=AI market research,synthetic audiences,persona validation,focus group automation
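The pipeline below reads these values at import time, so a missing key surfaces as a bare `KeyError`. It can help to validate configuration up front and fail fast with a clear message. A minimal sketch (the `load_config` helper is illustrative, not part of the Mavera SDK):

```python
def load_config(env: dict) -> dict:
    """Validate required settings and apply the documented defaults."""
    missing = [k for k in ("MAVERA_API_KEY", "MAVERA_WORKSPACE_ID") if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing required settings: {', '.join(missing)}")
    return {
        "api_key": env["MAVERA_API_KEY"],
        "workspace_id": env["MAVERA_WORKSPACE_ID"],
        # Default threshold of 7, matching the playbook's recommendation
        "threshold": int(env.get("SIGNIFICANCE_THRESHOLD", "7")),
        # Comma-separated keywords, whitespace-trimmed, empty entries dropped
        "keywords": [k.strip() for k in env.get("NEWS_KEYWORDS", "").split(",") if k.strip()],
    }
```

Call it once at startup with `dict(os.environ)` before constructing any clients.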

The Pipeline

Significance Scoring Criteria

Not every news story warrants deep research. The scoring criteria:
| Factor | Weight | Examples |
| --- | --- | --- |
| Direct competitor action | High | Competitor raises $50M, launches competing feature |
| Regulatory change | High | New data privacy law, industry regulation |
| Market shift | Medium | Customer segment behavior change, new market entrant |
| Technology trend | Medium | New AI capability, platform shift |
| Tangential mention | Low | Industry mentioned in passing, opinion pieces |
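If you want the story's category to influence the final score deterministically rather than leaving the weighting entirely to the model, the factor weights can be applied as a post-processing step. A sketch with hypothetical numeric weights (tune these to your market):

```python
# Hypothetical numeric weights for the factor table above.
CATEGORY_WEIGHTS = {
    "competitor_action": 1.0,  # High
    "regulatory": 1.0,         # High
    "market_shift": 0.7,       # Medium
    "technology": 0.7,         # Medium
    "tangential": 0.3,         # Low
}

def weighted_score(raw_score: float, category: str) -> float:
    """Scale a 1-10 model score by its factor weight, floored at 1."""
    return max(1.0, raw_score * CATEGORY_WEIGHTS.get(category, 0.5))
```

Applied to each scored story before the threshold comparison, this keeps tangential mentions from ever triggering expensive research.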

The Flow

1. Configure news monitoring: set your industry keywords, competitor names, and monitoring frequency. Keywords should be specific enough to avoid noise but broad enough to catch relevant stories.
2. Fetch recent news: query the News API for stories matching your keywords. Filter by recency (last 24h, 7d, etc.) and relevance.
3. Score significance: use Chat with structured output to score each story’s significance to your business (1-10). Filter by your threshold.
4. Trigger Mave research: for stories above the threshold, launch a Mave Agent research thread. The prompt includes the story details and asks specific strategic questions.
5. Structure the intelligence brief: use Chat with structured output to format the research into a standardized brief with impact assessment, opportunities, threats, and recommended actions.
6. Archive and notify: save the brief and optionally trigger notifications (Slack, email, etc.). Build an intelligence archive over time.

Code: Full News-Triggered Research Pipeline

Setup and Configuration

import os
import json
import time
from datetime import datetime, timedelta
import requests
from openai import OpenAI

MAVERA_API_KEY = os.environ["MAVERA_API_KEY"]
WORKSPACE_ID = os.environ["MAVERA_WORKSPACE_ID"]
BASE = "https://app.mavera.io/api/v1"
HEADERS = {
    "Authorization": f"Bearer {MAVERA_API_KEY}",
    "Content-Type": "application/json",
}
mavera = OpenAI(api_key=MAVERA_API_KEY, base_url=BASE)

SIGNIFICANCE_THRESHOLD = int(os.environ.get("SIGNIFICANCE_THRESHOLD", "7"))

NEWS_KEYWORDS = os.environ.get(
    "NEWS_KEYWORDS",
    "AI market research,synthetic audiences,persona validation,focus group automation",
).split(",")

COMPANY_CONTEXT = {
    "name": "Acme",
    "category": "AI-powered market research platform",
    "competitors": ["Pollfish", "UserTesting", "Wynter", "SurveyMonkey", "Qualtrics"],
    "key_markets": ["B2B SaaS", "Marketing agencies", "Enterprise brand teams"],
    "strategic_priorities": [
        "Expand enterprise segment",
        "Launch self-serve pricing tier",
        "Build integration ecosystem",
    ],
}

Stage 1 — Fetch News

Query the News API for recent stories matching your keywords.
def fetch_news(lookback_hours: int = 24, max_results: int = 20) -> list[dict]:
    """Fetch recent news matching industry keywords."""
    all_stories = []

    for keyword in NEWS_KEYWORDS:
        resp = requests.post(
            f"{BASE}/news/search",
            headers=HEADERS,
            json={
                "query": keyword.strip(),
                "workspace_id": WORKSPACE_ID,
                "max_results": max_results,
            },
        ).json()

        if "error" in resp:
            print(f"Warning: News search failed for '{keyword}': {resp['error']['message']}")
            continue

        stories = resp.get("results", [])
        for story in stories:
            story["_search_keyword"] = keyword.strip()
        all_stories.extend(stories)

    # Deduplicate by URL or title
    seen = set()
    unique_stories = []
    for story in all_stories:
        key = story.get("url", story.get("title", ""))
        if key not in seen:
            seen.add(key)
            unique_stories.append(story)

    # Apply the lookback window client-side (assumes ISO-8601 published_at;
    # stories without a parseable timestamp are kept).
    cutoff = datetime.now().astimezone() - timedelta(hours=lookback_hours)
    recent_stories = []
    for story in unique_stories:
        try:
            published = datetime.fromisoformat(
                str(story.get("published_at")).replace("Z", "+00:00")
            ).astimezone()
            if published < cutoff:
                continue
        except (TypeError, ValueError):
            pass
        recent_stories.append(story)

    print(f"✓ Fetched {len(recent_stories)} recent unique stories from {len(NEWS_KEYWORDS)} keywords")
    return recent_stories


def fetch_latest_news(max_results: int = 20) -> list[dict]:
    """Fetch the latest news feed without keyword filtering."""
    resp = requests.get(
        f"{BASE}/news",
        headers=HEADERS,
        params={"workspace_id": WORKSPACE_ID, "limit": max_results},
    ).json()

    if "error" in resp:
        raise Exception(resp["error"]["message"])

    stories = resp.get("results", resp.get("data", []))
    print(f"✓ Fetched {len(stories)} stories from news feed")
    return stories

Stage 2 — Score Significance

Use Chat with structured output to score each story’s relevance to your business.
SIGNIFICANCE_SCHEMA = {"type": "json_schema", "json_schema": {
    "name": "significance_score", "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "score": {"type": "number", "description": "Significance 1-10"},
            "category": {
                "type": "string",
                "description": "competitor_action, regulatory, market_shift, technology, tangential",
            },
            "reasoning": {"type": "string", "description": "Why this score"},
            "urgency": {"type": "string", "description": "immediate, this_week, this_month, informational"},
            "affected_priorities": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Which strategic priorities are affected",
            },
        },
        "required": ["score", "category", "reasoning", "urgency", "affected_priorities"],
    },
}}


def score_significance(story: dict) -> dict:
    """Score a news story's significance to our business."""
    prompt = (
        f"You are a strategic analyst for {COMPANY_CONTEXT['name']} "
        f"({COMPANY_CONTEXT['category']}).\n\n"
        f"Our competitors: {', '.join(COMPANY_CONTEXT['competitors'])}\n"
        f"Our key markets: {', '.join(COMPANY_CONTEXT['key_markets'])}\n"
        f"Our strategic priorities:\n"
    )
    for p in COMPANY_CONTEXT["strategic_priorities"]:
        prompt += f"  - {p}\n"

    prompt += (
        f"\nRate the significance of this news story to our business (1-10).\n\n"
        f"**Title:** {story.get('title', 'No title')}\n"
        f"**Source:** {story.get('source', 'Unknown')}\n"
        f"**Published:** {story.get('published_at', 'Unknown')}\n"
        f"**Summary:** {story.get('description', story.get('summary', 'No summary'))}\n"
    )

    resp = mavera.responses.create(
        model="mavera-1",
        input=[{"role": "user", "content": prompt}],
        extra_body={"response_format": SIGNIFICANCE_SCHEMA},
    )

    result = json.loads(resp.output[0].content[0].text)
    result["story"] = story
    return result


def batch_score_stories(stories: list[dict]) -> list[dict]:
    """Score all stories and sort by significance."""
    scored = []

    for i, story in enumerate(stories):
        result = score_significance(story)
        scored.append(result)

        status = "TRIGGER" if result["score"] >= SIGNIFICANCE_THRESHOLD else "skip"
        print(f"  [{status}] {result['score']}/10 — {story.get('title', 'No title')[:60]}")

        time.sleep(1)

    scored.sort(key=lambda x: x["score"], reverse=True)
    triggered = [s for s in scored if s["score"] >= SIGNIFICANCE_THRESHOLD]
    print(f"\n✓ Scored {len(stories)} stories. {len(triggered)} above threshold ({SIGNIFICANCE_THRESHOLD}).")
    return scored
Set your threshold based on volume. If you’re getting 50+ stories/day, use 8+. For niche markets with fewer stories, 6+ catches more relevant signals.
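That tuning advice can also be automated: adjust the threshold from the story volume you actually observe. A hypothetical heuristic (the `dynamic_threshold` function is illustrative, and the volume cutoffs are assumptions to adapt):

```python
def dynamic_threshold(stories_per_day: int, base: int = 7) -> int:
    """Raise the bar in noisy periods, lower it in quiet ones."""
    if stories_per_day >= 50:
        # High volume: only the most significant stories should trigger research
        return max(base, 8)
    if stories_per_day <= 5:
        # Niche market: catch weaker signals too
        return min(base, 6)
    return base
```

Feed it the count from the previous day's run before scoring today's batch.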

Stage 3 — Mave Research on Triggered Stories

For each high-significance story, launch a 3-turn Mave research thread.
RESEARCH_TURNS = [
    "What are the immediate implications of this event for our market? Who wins, who loses?",
    "What specific opportunities does this create for us? Be concrete — product features, positioning angles, partnerships, or market segments we could target.",
    "What threats or risks does this pose? What should we watch for in the next 30/60/90 days? What defensive moves should we consider?",
]


def research_story(scored_story: dict) -> dict:
    """Run a 3-turn Mave research thread on a triggered story."""
    story = scored_story["story"]
    initial_prompt = (
        f"A significant event just occurred in our market. Analyze its strategic implications.\n\n"
        f"**Our company:** {COMPANY_CONTEXT['name']} ({COMPANY_CONTEXT['category']})\n"
        f"**Our competitors:** {', '.join(COMPANY_CONTEXT['competitors'])}\n"
        f"**Our strategic priorities:** {', '.join(COMPANY_CONTEXT['strategic_priorities'])}\n\n"
        f"**News event:**\n"
        f"Title: {story.get('title', 'N/A')}\n"
        f"Source: {story.get('source', 'N/A')}\n"
        f"Published: {story.get('published_at', 'N/A')}\n"
        f"Summary: {story.get('description', story.get('summary', 'N/A'))}\n\n"
        f"Significance score: {scored_story['score']}/10 ({scored_story['category']})\n\n"
        f"{RESEARCH_TURNS[0]}"
    )

    thread_id = None
    research_results = []

    # Turn 1: Initial analysis
    resp = requests.post(
        f"{BASE}/mave/chat",
        headers=HEADERS,
        json={"message": initial_prompt},
        timeout=120,
    ).json()

    if "error" in resp:
        raise Exception(resp["error"]["message"])

    thread_id = resp.get("thread_id")
    research_results.append({
        "turn": "implications",
        "content": resp.get("content", ""),
        "sources": resp.get("sources", []),
    })
    print(f"  ✓ Turn 1: Implications ({len(resp.get('content', ''))} chars)")

    # Turns 2-3: Follow-ups
    for i, turn_prompt in enumerate(RESEARCH_TURNS[1:], start=2):
        time.sleep(2)
        resp = requests.post(
            f"{BASE}/mave/chat",
            headers=HEADERS,
            json={"thread_id": thread_id, "message": turn_prompt},
            timeout=120,
        ).json()

        if "error" in resp:
            raise Exception(resp["error"]["message"])

        turn_label = "opportunities" if i == 2 else "threats"
        research_results.append({
            "turn": turn_label,
            "content": resp.get("content", ""),
            "sources": resp.get("sources", []),
        })
        print(f"  ✓ Turn {i}: {turn_label.title()} ({len(resp.get('content', ''))} chars)")

    return {
        "thread_id": thread_id,
        "story": story,
        "significance": scored_story,
        "research": research_results,
    }
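Mave research calls are long-running and can occasionally time out; in production you may want to wrap each `requests.post` in a small retry helper. A minimal sketch with linear backoff (the `with_retries` helper is illustrative, not part of the Mavera SDK):

```python
import time

def with_retries(fn, attempts: int = 3, delay: float = 2.0):
    """Call fn(), retrying on any exception with linear backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # Out of attempts: surface the original error
            time.sleep(delay * (i + 1))
```

Usage: `resp = with_retries(lambda: requests.post(..., timeout=120).json())`.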

Stage 4 — Generate Intelligence Brief

INTEL_BRIEF_SCHEMA = {"type": "json_schema", "json_schema": {
    "name": "intelligence_brief", "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "headline": {"type": "string"},
            "event_summary": {"type": "string"},
            "significance_score": {"type": "number"},
            "category": {"type": "string"},
            "urgency": {"type": "string"},
            "market_impact": {"type": "string"},
            "opportunities": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "opportunity": {"type": "string"},
                        "time_sensitivity": {"type": "string"},
                        "effort": {"type": "string"},
                    },
                    "required": ["opportunity", "time_sensitivity", "effort"],
                },
            },
            "threats": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "threat": {"type": "string"},
                        "likelihood": {"type": "string"},
                        "mitigation": {"type": "string"},
                    },
                    "required": ["threat", "likelihood", "mitigation"],
                },
            },
            "recommended_actions": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "action": {"type": "string"},
                        "owner": {"type": "string"},
                        "deadline": {"type": "string"},
                    },
                    "required": ["action", "owner", "deadline"],
                },
            },
            "sources": {"type": "array", "items": {"type": "string"}},
        },
        "required": [
            "headline", "event_summary", "significance_score", "category",
            "urgency", "market_impact", "opportunities", "threats",
            "recommended_actions", "sources",
        ],
    },
}}


def generate_intel_brief(research_result: dict) -> dict:
    """Structure the research into a standardized intelligence brief."""
    combined_research = "\n\n".join(
        f"### {r['turn'].title()}\n{r['content']}"
        for r in research_result["research"]
    )

    all_sources = []
    for r in research_result["research"]:
        for source in r.get("sources", []):
            url = source.get("url", source) if isinstance(source, dict) else source
            if url not in all_sources:
                all_sources.append(url)

    prompt = (
        "Synthesize this research into a concise strategic intelligence brief.\n\n"
        f"## Original News Event\n"
        f"Title: {research_result['story'].get('title', 'N/A')}\n"
        f"Source: {research_result['story'].get('source', 'N/A')}\n\n"
        f"## Mave Research\n{combined_research}\n\n"
        f"## Sources\n{json.dumps(all_sources[:10])}\n\n"
        "Format as an intelligence brief with specific, actionable recommendations. "
        "Assign owners (Product, Marketing, Sales, Leadership) and deadlines."
    )

    resp = mavera.responses.create(
        model="mavera-1",
        input=[{"role": "user", "content": prompt}],
        extra_body={"response_format": INTEL_BRIEF_SCHEMA},
    )

    return json.loads(resp.output[0].content[0].text)

Running the Full Pipeline

def run_news_triggered_research():
    print("=" * 60)
    print("NEWS-TRIGGERED RESEARCH")
    print(f"Threshold: {SIGNIFICANCE_THRESHOLD}/10")
    print(f"Keywords: {', '.join(NEWS_KEYWORDS)}")
    print("=" * 60)

    # Stage 1: Fetch news
    print("\n--- Stage 1: Fetching News ---")
    stories = fetch_news(lookback_hours=24)

    if not stories:
        print("No stories found. Try broader keywords or a longer lookback.")
        return []

    # Stage 2: Score significance
    print("\n--- Stage 2: Scoring Significance ---")
    scored = batch_score_stories(stories)

    # Stage 3: Research triggered stories
    triggered = [s for s in scored if s["score"] >= SIGNIFICANCE_THRESHOLD]

    if not triggered:
        print("\nNo stories above threshold. Lowering the threshold or broadening keywords may help.")
        # Save all scored stories for review
        with open("news_scored.json", "w") as f:
            json.dump([{
                "title": s["story"].get("title"),
                "score": s["score"],
                "category": s["category"],
                "reasoning": s["reasoning"],
            } for s in scored], f, indent=2)
        return []

    print(f"\n--- Stage 3: Researching {len(triggered)} Triggered Stories ---")
    briefs = []
    for i, scored_story in enumerate(triggered, 1):
        title = scored_story["story"].get("title", "Unknown")
        print(f"\n[{i}/{len(triggered)}] Researching: {title[:60]}...")

        research = research_story(scored_story)
        brief = generate_intel_brief(research)
        briefs.append(brief)

        print(f"  ✓ Brief generated: {brief['headline']}")
        print(f"  Urgency: {brief['urgency']}")
        print(f"  Opportunities: {len(brief['opportunities'])}")
        print(f"  Threats: {len(brief['threats'])}")

    # Save all briefs
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"intel_briefs_{timestamp}.json"
    with open(filename, "w") as f:
        json.dump(briefs, f, indent=2)

    print(f"\n✓ Saved {len(briefs)} intelligence briefs to {filename}")

    # Print summary
    print(f"\n{'='*60}")
    print("INTELLIGENCE SUMMARY")
    print(f"{'='*60}")
    for brief in briefs:
        print(f"\n📰 {brief['headline']}")
        print(f"   Significance: {brief['significance_score']}/10 | Urgency: {brief['urgency']}")
        if brief["recommended_actions"]:
            print(f"   Top action: {brief['recommended_actions'][0]['action']}")

    return briefs


if __name__ == "__main__":
    run_news_triggered_research()

Example Output

{
  "headline": "UserTesting acquires Wynter — consolidation creates opportunity for differentiated positioning",
  "event_summary": "UserTesting announced the acquisition of Wynter, a B2B message testing platform, for an undisclosed sum. The deal combines UserTesting's panel-based testing with Wynter's B2B audience targeting.",
  "significance_score": 9,
  "category": "competitor_action",
  "urgency": "this_week",
  "market_impact": "Market consolidation reduces the number of independent competitors. The combined entity will have stronger B2B reach but may face integration challenges. Customers unhappy with the merger may look for alternatives.",
  "opportunities": [
    {
      "opportunity": "Target Wynter customers who dislike being absorbed into a larger platform — offer migration incentives",
      "time_sensitivity": "2 weeks",
      "effort": "Low"
    },
    {
      "opportunity": "Position as the AI-native alternative to legacy panel-based research — differentiate on speed and cost",
      "time_sensitivity": "1 month",
      "effort": "Medium"
    }
  ],
  "threats": [
    {
      "threat": "Combined UserTesting+Wynter could build synthetic audience features, closing our differentiation gap",
      "likelihood": "Medium (6-12 months)",
      "mitigation": "Accelerate feature development in focus groups and persona depth"
    }
  ],
  "recommended_actions": [
    {
      "action": "Launch a 'switch from Wynter' landing page and email campaign targeting known Wynter users",
      "owner": "Marketing",
      "deadline": "This week"
    },
    {
      "action": "Write a thought leadership piece on 'Why AI-native research beats panel consolidation'",
      "owner": "Content",
      "deadline": "2 weeks"
    },
    {
      "action": "Brief sales team on competitive talking points against the combined entity",
      "owner": "Sales",
      "deadline": "3 days"
    }
  ],
  "sources": [
    "https://example.com/usertesting-wynter-acquisition",
    "https://example.com/market-research-industry-consolidation"
  ]
}

Variations

Run the pipeline on a schedule (e.g., every 6 hours) using cron or a scheduler:
# crontab: 0 */6 * * * python news_monitor.py
# Or use APScheduler for in-process scheduling:
from apscheduler.schedulers.blocking import BlockingScheduler

scheduler = BlockingScheduler()
scheduler.add_job(run_news_triggered_research, "interval", hours=6)
scheduler.start()
Post high-urgency briefs to Slack after generation:
import requests as http_requests

def notify_slack(brief: dict, webhook_url: str):
    text = (
        f"*{brief['headline']}*\n"
        f"Significance: {brief['significance_score']}/10 | Urgency: {brief['urgency']}\n"
        f"Top action: {brief['recommended_actions'][0]['action']}"
    )
    http_requests.post(webhook_url, json={"text": text})
Create a dedicated keyword list per competitor for targeted tracking:
COMPETITOR_KEYWORDS = {
    "Pollfish": ["Pollfish funding", "Pollfish acquisition", "Pollfish launch"],
    "UserTesting": ["UserTesting IPO", "UserTesting acquisition", "UserTesting product"],
    "Wynter": ["Wynter B2B", "Wynter messaging", "Wynter funding"],
}
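To run these targeted searches, the map can be expanded into individual (competitor, keyword) jobs and each keyword fed through the same news search call used in the pipeline, tagging results with the competitor name for per-competitor archives. The `expand_keyword_map` helper is illustrative:

```python
def expand_keyword_map(keyword_map: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Flatten the per-competitor map into (competitor, keyword) search jobs."""
    return [(name, kw) for name, kws in keyword_map.items() for kw in kws]
```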
After researching a significant event, run a Focus Group to test how your customers would react:
# After generating intel brief
fg_payload = {
    "name": f"Impact Validation: {brief['headline'][:50]}",
    "sample_size": 25,
    "persona_ids": customer_persona_ids,
    "questions": [
        {"question": f"How does this event affect your evaluation of {COMPANY_CONTEXT['name']}?", "type": "LIKERT", "scale": 10, "order": 1},
        {"question": "What concerns does this raise for you?", "type": "OPEN_ENDED", "order": 2},
    ],
}
Store all briefs and periodically analyze trends:
import glob

def load_archive():
    all_briefs = []
    for path in glob.glob("intel_briefs_*.json"):
        with open(path) as f:  # context manager ensures the file is closed
            all_briefs.extend(json.load(f))
    return all_briefs

archive = load_archive()
categories = {}
for brief in archive:
    cat = brief["category"]
    categories[cat] = categories.get(cat, 0) + 1
print("Category distribution:", categories)

Credits Estimate

| Stage | Typical Cost | Notes |
| --- | --- | --- |
| News search (per keyword) | 5–15 credits | Depends on news volume |
| Significance scoring (per story) | 1–3 credits | One chat call per story |
| Mave research (3 turns per story) | 30–90 credits | Triggered stories only |
| Intelligence brief (per story) | 5–15 credits | One structured output |
| Total (20 stories scored, 2 triggered) | ~100–200 credits | |
| Total (20 stories scored, 5 triggered) | ~200–500 credits | |
Credit cost scales with triggered stories, not total stories monitored. A high significance threshold (8+) keeps costs low while catching only truly impactful events. Lower the threshold during periods of high market activity.
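For budgeting, the per-stage ranges above can be turned into a rough low/high envelope per run. A sketch (the `estimate_credits` helper is illustrative; its worst-case bound is wider than the rounded totals in the table):

```python
def estimate_credits(n_keywords: int, n_scored: int, n_triggered: int) -> tuple[int, int]:
    """Low/high credit envelope from the per-stage ranges in the table."""
    # research (30-90) + brief (5-15) per triggered story
    low = n_keywords * 5 + n_scored * 1 + n_triggered * (30 + 5)
    high = n_keywords * 15 + n_scored * 3 + n_triggered * (90 + 15)
    return low, high
```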

See Also

  • News Intelligence: News API endpoints and search capabilities
  • Mave Agent: Research agent with threads and sources
  • Market Entry Research: Use Mave for comprehensive market research
  • Brand Perception Audit: Monitor how events shift brand perception
  • Annual Planning Kickoff: Feed intelligence into annual planning
  • Credits & Budget: Manage monitoring costs