Scenario

Your team takes meeting notes in Notion — standups, strategy sessions, customer calls, board meetings. After weeks of meetings, patterns hide in plain sight: recurring decisions never executed, action items forgotten, themes nobody synthesized. This job pulls meeting pages from a database, extracts their block content, and sends the aggregate to Mave Agent for cross-meeting analysis.

Flow: Notion POST /databases/{id}/query (meeting notes DB) → GET /blocks/{page_id}/children per page → aggregate text → Mavera POST /api/v1/mave/chat → action items, decisions, themes

Code

import os, requests, time

NOTION = os.environ["NOTION_API_KEY"]
MV = os.environ["MAVERA_API_KEY"]
NB = "https://api.notion.com/v1"
MB = "https://app.mavera.io/api/v1"
NH = {
    "Authorization": f"Bearer {NOTION}",
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}
MH = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

MEETINGS_DB_ID = "your-meetings-database-id"

# 1. Query recent meeting pages
resp = requests.post(f"{NB}/databases/{MEETINGS_DB_ID}/query", headers=NH, json={
    "filter": {
        "property": "Date",
        "date": {"past_month": {}},
    },
    "sorts": [{"property": "Date", "direction": "descending"}],
    "page_size": 30,
})
resp.raise_for_status()
pages = resp.json().get("results", [])

print(f"Found {len(pages)} meeting pages in the past month")

# 2. Extract text from each page's blocks
def extract_text(block):
    """Extract plain text from a Notion block's rich_text array."""
    btype = block.get("type", "")
    content = block.get(btype, {})
    rich_text = content.get("rich_text", [])
    return "".join(t.get("plain_text", "") for t in rich_text)

def get_page_text(page_id, depth=0):
    """Fetch all blocks from a page and concatenate text."""
    texts = []
    cursor = None
    while True:
        params = {"page_size": 100}
        if cursor:
            params["start_cursor"] = cursor
        r = requests.get(f"{NB}/blocks/{page_id}/children", headers=NH, params=params)
        if r.status_code == 429:
            # Back off for the duration Notion requests before retrying
            time.sleep(float(r.headers.get("Retry-After", 1)))
            continue
        r.raise_for_status()
        data = r.json()
        for block in data.get("results", []):
            text = extract_text(block)
            if text.strip():
                prefix = "- " if block.get("type") in ("bulleted_list_item", "numbered_list_item", "to_do") else ""
                texts.append(f"{prefix}{text}")
            if block.get("has_children") and depth < 2:
                texts.extend(get_page_text(block["id"], depth + 1))
        cursor = data.get("next_cursor")
        if not cursor:
            break
        time.sleep(0.4)
    return texts

meetings_corpus = []
for page in pages[:20]:
    props = page.get("properties", {})
    title_parts = props.get("Name", props.get("Title", {})).get("title", [])
    title = "".join(t.get("plain_text", "") for t in title_parts) or "Untitled Meeting"
    date = (props.get("Date", {}).get("date", {}) or {}).get("start", "unknown")

    text_lines = get_page_text(page["id"])
    if text_lines:
        meetings_corpus.append(f"## {title} ({date})\n" + "\n".join(text_lines[:50]))
    time.sleep(0.4)

corpus = "\n\n---\n\n".join(meetings_corpus)
print(f"Extracted text from {len(meetings_corpus)} meetings ({len(corpus)} chars)")

# 3. Mave analysis
resp = requests.post(f"{MB}/mave/chat", headers=MH, json={
    "message": (
        f"Meeting intelligence analyst. Analyze {len(meetings_corpus)} meetings from the past month.\n\n"
        f"MEETING NOTES:\n{corpus[:12000]}\n\n"
        "Extract and structure:\n\n"
        "1. **ACTION ITEMS** — Who owes what, by when. Flag overdue or repeated items.\n"
        "2. **DECISIONS MADE** — Key decisions with date and context.\n"
        "3. **RECURRING THEMES** — Topics that appear across 3+ meetings.\n"
        "4. **BLOCKERS & RISKS** — Issues mentioned but unresolved.\n"
        "5. **STRATEGIC PATTERNS** — Higher-level trends across all meetings.\n"
        "6. **FOLLOW-UP RECOMMENDATIONS** — What should happen next based on the patterns.\n\n"
        "Quote directly from the notes. Include meeting titles for attribution."
    ),
})
resp.raise_for_status()
analysis = resp.json()

print(f"\n{'='*60}\nMEETING INTELLIGENCE REPORT\n{'='*60}")
print(analysis.get("content", "")[:3000])

Example Output

Found 24 meeting pages in the past month
Extracted text from 20 meetings (18432 chars)

MEETING INTELLIGENCE REPORT
============================================================

## 1. ACTION ITEMS
- @Sarah: Finalize Q3 messaging framework (due Mar 7, mentioned in
  "Marketing Sync 3/3" and "Brand Review 3/5" — appears overdue)
- @Dev Team: Ship API v2 pagination (from "Sprint Planning 3/1",
  blocked by schema decision — see Blockers)
- @Mike: Schedule customer advisory board (mentioned 3 times,
  no progress noted)

## 2. DECISIONS MADE
- [3/5] Brand Review: Approved new tagline "Ship faster, learn faster"
- [3/3] Marketing Sync: Moved product launch from April 1 → April 15
- [3/1] Sprint Planning: Prioritized mobile onboarding over desktop

## 3. RECURRING THEMES (3+ meetings)
- "Customer onboarding friction" (7/20 meetings)
- "Enterprise pricing model" (5/20 meetings)
- "Competitor X free tier response" (4/20 meetings)

## 4. BLOCKERS
- API schema decision blocking 3 downstream features
- Legal review of new Terms of Service (2 weeks, no owner assigned)

## 5. FOLLOW-UP RECOMMENDATIONS
- Assign owner to ToS legal review immediately
- Escalate API schema decision to CTO — blocking velocity
- Consolidate "enterprise pricing" discussions into single meeting

Error Handling

Notion supports nested content (toggles, callouts with children). The code recurses up to depth 2. For deeply nested pages, increase the depth limit but watch rate limits.
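As a minimal sketch of how depth-limited recursion over nested blocks behaves — using hypothetical in-memory block dicts with an inlined `children` list rather than live API responses (the real API requires a separate GET per parent) — consider:

```python
def extract_nested(block, depth=0, max_depth=2):
    """Collect plain text from a block and its children, up to max_depth levels."""
    btype = block.get("type", "")
    rich = block.get(btype, {}).get("rich_text", [])
    texts = ["".join(t.get("plain_text", "") for t in rich)]
    if depth < max_depth:
        for child in block.get("children", []):
            texts.extend(extract_nested(child, depth + 1, max_depth))
    return [t for t in texts if t.strip()]

toggle = {
    "type": "toggle",
    "toggle": {"rich_text": [{"plain_text": "Q3 goals"}]},
    "children": [
        {"type": "paragraph", "paragraph": {"rich_text": [{"plain_text": "Ship v2"}]}},
    ],
}
print(extract_nested(toggle))
```

Raising `max_depth` captures deeper toggles and callouts, but each extra level multiplies the number of child-fetch requests, so keep the inter-request sleep in place.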
Pages with more than 100 blocks require pagination via start_cursor, since the API returns at most 100 children per request. The code handles this automatically. Very large corpora may hit the 12K character limit applied to the prompt — increase the slice, or split the analysis across multiple requests.
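If the corpus outgrows a single prompt, one option is to pack whole meetings into size-bounded chunks and send each to POST /mave/chat separately, then merge the per-chunk summaries in a final call. A sketch of the chunking step, assuming the same `\n\n---\n\n` separator used when joining the corpus:

```python
def chunk_corpus(corpus, limit=12000, sep="\n\n---\n\n"):
    """Greedily pack whole meetings into chunks of at most `limit` characters."""
    chunks, current = [], ""
    for meeting in corpus.split(sep):
        candidate = f"{current}{sep}{meeting}" if current else meeting
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # A single meeting larger than the limit is truncated
            current = meeting[:limit]
    if current:
        chunks.append(current)
    return chunks
```

Keeping meeting boundaries intact means each chunk stays self-attributing: every quote Mave pulls still sits under its `## Title (date)` header.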
The past_month filter requires the property to be a date type. If your database uses created_time instead, change the filter to {"timestamp": "created_time", "created_time": {"past_month": {}}}.
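For example, the query body with the timestamp variant (same endpoint as the script above, with the sort also switched to created_time) would look like:

```python
# Alternate query body: filter and sort on the page's created_time
# timestamp instead of a "Date" property
query = {
    "filter": {
        "timestamp": "created_time",
        "created_time": {"past_month": {}},
    },
    "sorts": [{"timestamp": "created_time", "direction": "descending"}],
    "page_size": 30,
}
# requests.post(f"{NB}/databases/{MEETINGS_DB_ID}/query", headers=NH, json=query)
```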
Each meeting page must be shared with the integration. If pages are in a shared workspace, sharing the parent database shares all child pages automatically.