

Scenario

Your marketing team runs campaigns as Asana projects — each task is a content piece with custom fields for Content Type, Channel, Priority, and Due Date. You need a weekly status report that goes beyond “X tasks completed”: you want insight into velocity trends, bottleneck stages, and content mix. This job pulls all tasks from a campaign project, aggregates the pipeline state, and sends the result to Mave for a strategic status report.

Flow: Asana GET /projects/{id}/tasks → Aggregate by status/custom fields → Mavera POST /api/v1/mave/chat → Campaign status report

Code

import os, requests, time
from collections import Counter
from datetime import datetime

ASANA = os.environ["ASANA_PAT"]
MV = os.environ["MAVERA_API_KEY"]
AB = "https://app.asana.com/api/1.0"
MB = "https://app.mavera.io/api/v1"
AH = {"Authorization": f"Bearer {ASANA}"}
MH = {"Authorization": f"Bearer {MV}", "Content-Type": "application/json"}

PROJECT_GID = "1234567890123456"

# 1. Fetch all tasks with custom fields
tasks = []
offset = None
while True:
    params = {
        "project": PROJECT_GID,
        "opt_fields": "name,completed,completed_at,due_on,assignee.name,"
                      "custom_fields.name,custom_fields.display_value,"
                      "memberships.section.name,tags.name",
        "limit": 100,
    }
    if offset:
        params["offset"] = offset
    r = requests.get(f"{AB}/tasks", headers=AH, params=params, timeout=30)
    if r.status_code == 429:
        # Rate limited — honor Asana's Retry-After header before retrying this page
        retry = int(r.headers.get("Retry-After", 30))
        time.sleep(retry)
        continue
    r.raise_for_status()
    data = r.json()
    tasks.extend(data.get("data", []))
    next_page = data.get("next_page") or {}
    offset = next_page.get("offset")
    if not offset:
        break

print(f"Fetched {len(tasks)} tasks from campaign project")

# 2. Analyze task pipeline
today = datetime.now().strftime("%Y-%m-%d")
completed = [t for t in tasks if t.get("completed")]
active = [t for t in tasks if not t.get("completed")]
overdue = [t for t in active if t.get("due_on") and t["due_on"] < today]

sections = Counter()
content_types = Counter()
for t in tasks:
    for mem in t.get("memberships", []):
        sec = mem.get("section", {}).get("name", "No Section")
        sections[sec] += 1
    for cf in t.get("custom_fields", []):
        if cf.get("name") == "Content Type" and cf.get("display_value"):
            content_types[cf["display_value"]] += 1

pipeline = "\n".join(f"  {sec}: {n} tasks" for sec, n in sections.most_common())
mix = "\n".join(f"  {ct}: {n}" for ct, n in content_types.most_common())

# 3. Build task details
task_details = []
for t in tasks[:40]:  # cap the detail list to keep the prompt compact
    assignee = t.get("assignee", {})
    section = next((m.get("section", {}).get("name", "") for m in t.get("memberships", [])), "")
    custom = {cf["name"]: cf.get("display_value", "") for cf in t.get("custom_fields", []) if cf.get("display_value")}
    due = t.get("due_on")
    if t.get("completed"):
        status = "Done"
    elif due and due < datetime.now().strftime("%Y-%m-%d"):
        status = "OVERDUE"
    else:
        status = "Active"
    task_details.append(
        f"- [{status}] {t['name']} | Section: {section} | "
        f"Assignee: {assignee.get('name', 'Unassigned')} | Due: {t.get('due_on', 'None')} | "
        f"{', '.join(f'{k}: {v}' for k, v in custom.items())}"
    )

# 4. Mave status report
resp = requests.post(f"{MB}/mave/chat", headers=MH, json={
    "message": (
        f"Campaign status analyst. Produce a strategic status report for this campaign project.\n\n"
        f"SUMMARY: {len(tasks)} total | {len(completed)} done | {len(active)} active | {len(overdue)} overdue\n\n"
        f"PIPELINE BY SECTION:\n{pipeline}\n\n"
        f"CONTENT MIX:\n{mix}\n\n"
        f"TASK DETAILS:\n" + "\n".join(task_details[:40]) + "\n\n"
        "Generate:\n"
        "1. **Executive Summary** (3 sentences — health, velocity, risk)\n"
        "2. **Velocity Analysis** — Are we on pace? Compare completed vs remaining vs time left\n"
        "3. **Bottleneck Detection** — Which sections/stages have the most stuck tasks?\n"
        "4. **Content Mix Assessment** — Is the mix aligned with campaign goals?\n"
        "5. **Overdue Task Triage** — Each overdue task with recommended action\n"
        "6. **Assignee Load Balancing** — Who's overloaded? Who has capacity?\n"
        "7. **Recommendations** — 3-5 specific actions for this week"
    ),
}, timeout=120)
resp.raise_for_status()
report = resp.json()

print(f"\n{'='*60}\nCAMPAIGN STATUS REPORT\n{'='*60}")
print(report.get("content", "")[:3000])

Example Output

Fetched 47 tasks from campaign project

CAMPAIGN STATUS REPORT
============================================================

## Executive Summary
Campaign is 63% complete (30/47 tasks) with 5 overdue items
concentrated in the Design Review stage. Velocity has slowed —
only 4 tasks completed this week vs 8 last week. Three blog
posts are at risk of missing the April 1 launch deadline.

## Velocity Analysis
- Weeks 1-2: 16 tasks completed (8/week)
- Weeks 3-4: 14 tasks completed (7/week)
- Current week: 4 completed, 17 remaining, 2 weeks left
- Required pace: 8.5/week — achievable if bottleneck clears

## Bottleneck Detection
- "Design Review" section: 8 tasks queued (3 overdue)
- Only 1 designer assigned to all 8 → capacity constraint
- "Copy Approval" has 4 tasks waiting on VP sign-off

## Recommendations
1. Reassign 3 Design Review tasks to contractor
2. Schedule 15-min VP approval session for queued copy
3. Deprioritize 2 low-priority social posts to free capacity
4. Move launch date for case study to April 8 (dependency chain)

Error Handling

Rate limits: Asana free plans allow ~150 requests/min. The code reads the Retry-After header on 429 responses and waits before retrying. For large projects, use opt_fields to trim response size and reduce call count.
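The retry-and-wait pattern from the fetch loop can be factored into a reusable helper. This is a minimal sketch — the function name and the exponential-backoff fallback (used when Retry-After is absent) are our own choices, not part of the Asana API:

```python
import time
import requests

def get_with_retry(url, headers=None, params=None, max_attempts=5):
    """GET with basic 429 handling: honor Retry-After, else back off exponentially."""
    for attempt in range(max_attempts):
        r = requests.get(url, headers=headers, params=params, timeout=30)
        if r.status_code != 429:
            r.raise_for_status()
            return r
        # Fall back to 1, 2, 4, 8... seconds if the server sent no Retry-After
        wait = int(r.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError(f"Still rate-limited after {max_attempts} attempts: {url}")
```

Swapping `requests.get(...)` calls in the fetch loop for `get_with_retry(...)` keeps the pagination logic free of rate-limit bookkeeping.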
Custom fields: field values only appear in responses if you request them via opt_fields. If a custom field name doesn’t match what you expect, check the exact name via GET /projects/{gid}/custom_field_settings.
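A quick sanity check before hardcoding names like "Content Type" is to list the fields the project actually defines. A sketch using the custom_field_settings endpoint mentioned above (the helper name and return shape are our own):

```python
import requests

def list_custom_fields(base_url, headers, project_gid):
    """Return {field name: field type} for every custom field on the project."""
    r = requests.get(
        f"{base_url}/projects/{project_gid}/custom_field_settings",
        headers=headers,
        timeout=30,
    )
    r.raise_for_status()
    return {
        s["custom_field"]["name"]: s["custom_field"].get("type", "")
        for s in r.json().get("data", [])
        if s.get("custom_field")
    }
```

Called as `list_custom_fields(AB, AH, PROJECT_GID)` with the constants from the code above, it prints the exact spellings to match against in the aggregation loop.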
Pagination: projects with more than 100 tasks require pagination. The code follows next_page.offset automatically. For very large projects (1000+ tasks), filter by modified_since to reduce the data transferred.
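For the modified_since filter, the parameter takes an ISO 8601 timestamp. A small sketch that builds the query params for "tasks touched in the last week" (the helper name and the 7-day default are our own choices):

```python
from datetime import datetime, timedelta, timezone

def recent_task_params(project_gid, days=7, limit=100):
    """Query params restricting results to tasks modified in the last `days` days."""
    since = datetime.now(timezone.utc) - timedelta(days=days)
    return {
        "project": project_gid,
        "modified_since": since.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "limit": limit,
    }
```

Merging these params (plus the same opt_fields string) into the fetch loop keeps weekly report runs from re-downloading a long-finished backlog.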
Multi-homed tasks: a task can belong to sections in several projects. The code uses the first membership, which may not be the campaign project’s section. For accurate pipeline views, filter memberships by the primary project.
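One way to do that filtering, assuming `memberships.project.gid` is added to the opt_fields request so each membership carries its project (the helper name and "No Section" fallback mirror the aggregation loop above):

```python
def section_in_project(task, project_gid):
    """Return the task's section name within one specific project, skipping
    memberships that come from other projects the task is multi-homed into."""
    for mem in task.get("memberships", []):
        if mem.get("project", {}).get("gid") == project_gid:
            return mem.get("section", {}).get("name", "No Section")
    return "No Section"
```

Replacing the first-membership lookup with `section_in_project(t, PROJECT_GID)` keeps the pipeline counts scoped to the campaign project only.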