Documentation Index

Fetch the complete documentation index at: https://docs.mavera.io/llms.txt

Use this file to discover all available pages before exploring further.

Scenario

Perigon tags articles with structured entities — companies, people, topics, and locations. This job filters articles by competitor company entities, enriches them with related person entities (executive mentions), and feeds the structured data to Mave for deep competitive analysis that goes beyond keyword matching. Flow: Perigon GET /all?companyName={competitor} → Extract entities → Mavera POST /mave/chat → Entity-level intelligence report

Code

import os, requests, time

PG_KEY = os.environ["PERIGON_API_KEY"]
PG_BASE = "https://api.goperigon.com/v1"
MV = os.environ["MAVERA_API_KEY"]
MV_BASE = "https://app.mavera.io/api/v1"
MV_H = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

COMPETITORS = ["Salesforce", "HubSpot", "Adobe"]

# 1. Fetch articles per competitor entity
all_intel = {}
for comp in COMPETITORS:
    r = requests.get(f"{PG_BASE}/all", params={
        "apiKey": PG_KEY, "companyName": comp, "sortBy": "date",
        "size": 20, "sourceGroup": "top100",
        # Look back 7 days; Perigon expects YYYY-MM-DD for "from"
        "from": time.strftime("%Y-%m-%d", time.gmtime(time.time() - 7 * 86400)),
    })
    if not r.ok:
        print(f"{comp}: API error {r.status_code}")
        continue
    articles = r.json().get("articles", [])

    entity_data = []
    for a in articles:
        companies = [c.get("name","") for c in a.get("companies", [])]
        people = [p.get("name","") for p in a.get("people", [])]
        topics = [t.get("name","") for t in a.get("topics", [])]
        entity_data.append({
            "title": a.get("title",""),
            "source": a.get("source",{}).get("name",""),
            "date": a.get("pubDate","")[:10],
            "summary": a.get("summary", a.get("description",""))[:300],
            "companies": companies,
            "people": people,
            "topics": topics,
            "sentiment": a.get("sentiment",""),
        })
    all_intel[comp] = entity_data
    print(f"{comp}: {len(entity_data)} articles, {sum(len(e['people']) for e in entity_data)} person mentions")
    time.sleep(1)

# 2. Build structured corpus
corpus_parts = []
for comp, articles in all_intel.items():
    corpus_parts.append(f"\n## {comp} ({len(articles)} articles)")
    for a in articles:
        corpus_parts.append(
            f"- [{a['source']}] {a['title']} ({a['date']})\n"
            f"  Summary: {a['summary'][:200]}\n"
            f"  Entities: Companies={a['companies'][:5]}, People={a['people'][:3]}, Topics={a['topics'][:3]}\n"
            f"  Sentiment: {a['sentiment']}"
        )

# 3. Mave analysis
resp = requests.post(f"{MV_BASE}/mave/chat", headers=MV_H, json={
    "message": "Competitive intelligence analyst. Analyze entity-tagged news for these competitors.\n\n"
        + "\n".join(corpus_parts[:80])
        + "\n\nFor EACH competitor:\n"
        "1. **Executive Moves** — Who's being mentioned and why (hiring, departures, keynotes)\n"
        "2. **Product Signals** — What they're building or acquiring\n"
        "3. **Partnership Map** — Which companies appear alongside them\n"
        "4. **Topic Clusters** — What themes dominate their coverage\n"
        "5. **Sentiment Trajectory** — Getting better or worse coverage?\n"
        "6. **Strategic Implication** — What this means for our positioning\n\n"
        "End with a THREAT/OPPORTUNITY matrix."
})
resp.raise_for_status()  # fail loudly instead of parsing an error body
analysis = resp.json()

for comp in COMPETITORS:
    cnt = len(all_intel.get(comp, []))
    print(f"{comp}: {cnt} articles analyzed")
print(f"\n{'='*60}\nENTITY INTELLIGENCE REPORT\n{'='*60}")
print(analysis.get("content", "")[:3000])

Example Output

Salesforce: 20 articles, 14 person mentions
HubSpot: 18 articles, 9 person mentions
Adobe: 15 articles, 11 person mentions

ENTITY INTELLIGENCE REPORT
============================================================

## Salesforce
**Executive Moves:** Marc Benioff keynote at AI conference — positioning as
"AI-first CRM." New VP of AI hired from Google DeepMind.
**Product Signals:** Einstein Copilot expansion. Acquiring data pipeline startup.
**Partnership Map:** AWS (deepening), Snowflake (new integration), Anthropic.
**Sentiment:** Positive (65%) — Wall Street bullish on AI pivot.
**Implication:** Their AI narrative is credible. We need differentiated AI story.

THREAT/OPPORTUNITY MATRIX:
| Competitor | Threat | Opportunity |
|------------|--------|-------------|
| Salesforce | 8/10 — AI credibility | Mid-market gap as they go enterprise |
| HubSpot    | 5/10 — SMB loyalty   | They're slow on AI features |
| Adobe      | 6/10 — Creative suite | Marketing ops underserved |

Error Handling

Perigon tags entities with NER, so companyName=Apple may also match unrelated entities such as Apple Records. Add topic or category filters to narrow results.
Not all articles carry entity tags. Filter with len(companies) > 0 so only articles with structured data enter the corpus.
Each competitor costs one API call, so three competitors means three calls per run. That is safe on Starter plans for daily runs.
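Both mitigations above can be combined in a small helper. This is a sketch, not a definitive implementation: the `topic` query parameter name is an assumption based on Perigon's filter conventions, so verify the exact parameter name against the API reference before relying on it.

```python
import requests

PG_BASE = "https://api.goperigon.com/v1"

def build_params(api_key, company, topic=None, size=20):
    """Build /all query params; `topic` (assumed parameter name) narrows NER matches."""
    params = {"apiKey": api_key, "companyName": company, "size": size}
    if topic:
        params["topic"] = topic  # e.g. "CRM" to exclude unrelated "Apple"s
    return params

def keep_tagged(articles):
    """Drop articles with no company entity tags so the corpus stays structured."""
    return [a for a in articles if len(a.get("companies", [])) > 0]

def fetch_tagged(api_key, company, topic=None):
    """Fetch one competitor's articles and keep only entity-tagged ones."""
    r = requests.get(f"{PG_BASE}/all", params=build_params(api_key, company, topic))
    r.raise_for_status()
    return keep_tagged(r.json().get("articles", []))
```

The two pure helpers keep the filtering logic testable without a network call; only `fetch_tagged` touches the API.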