

Scenario

Your products accumulate reviews through Shopify metafields or a third-party app like Judge.me. You pull the review text for each product, send it to Mavera's Responses API with a structured-output system prompt, and receive a sentiment breakdown with recurring themes (quality, shipping, sizing, value).

Architecture

Reviews flow from Shopify product metafields (with Judge.me as a fallback source) into one Mavera structured-output request per product, which returns a JSON sentiment report.

Code

import os, json, requests
from openai import OpenAI

STORE = os.environ["SHOPIFY_STORE"]
TOKEN = os.environ["SHOPIFY_ACCESS_TOKEN"]
MV = os.environ["MAVERA_API_KEY"]
SH_B = f"https://{STORE}.myshopify.com/admin/api/2024-10"
SH_H = {"X-Shopify-Access-Token": TOKEN}
client = OpenAI(api_key=MV, base_url="https://app.mavera.io/api/v1")

def fetch_reviews(product_id):
    # Prefer reviews stored in the product's `reviews` metafield namespace.
    resp = requests.get(f"{SH_B}/products/{product_id}/metafields.json?namespace=reviews",
                        headers=SH_H, timeout=30)
    resp.raise_for_status()
    for mf in resp.json().get("metafields", []):
        if mf["key"] == "review_list":
            try:
                return json.loads(mf["value"])
            except (json.JSONDecodeError, TypeError):
                pass  # malformed metafield value: fall through to Judge.me
    # Fall back to the Judge.me reviews endpoint (depending on your plan,
    # it may also require an api_token query parameter).
    jm = requests.get(f"https://judge.me/api/v1/reviews"
                      f"?shop_domain={STORE}.myshopify.com&external_id={product_id}",
                      timeout=30)
    if not jm.ok:
        return []
    return [{"body": r["body"], "rating": r["rating"]} for r in jm.json().get("reviews", [])]

def analyze(title, reviews):
    # Cap at 40 reviews to stay within the context window (see Error Handling).
    text = "\n".join(f"- [{r.get('rating', '?')}/5] {r.get('body', '')}" for r in reviews[:40])
    c = client.responses.create(
        model="mavera-default",
        input=[
            {"role": "system", "content": (
                "You are a product review analyst. Return JSON: sentiment_breakdown "
                "(positive/neutral/negative counts), top_themes [{theme, count, sample_quote}], "
                "overall_score (1-10), summary (2 sentences)."
            )},
            {"role": "user", "content": f"Product: {title}\n\nReviews:\n{text}"},
        ],
        # The Responses API takes structured-output options via `text`, not `response_format`.
        text={"format": {"type": "json_object"}},
    )
    return json.loads(c.output_text)

# Pull up to 10 active products and print a one-line report for each.
products = requests.get(f"{SH_B}/products.json?limit=10&status=active",
                        headers=SH_H, timeout=30).json()["products"]
for p in products:
    reviews = fetch_reviews(p["id"])
    if not reviews:
        continue  # nothing to analyze for this product
    r = analyze(p["title"], reviews)
    print(f"{p['title']}: score={r['overall_score']}/10, themes={[t['theme'] for t in r['top_themes']]}")

Example Output

{
  "sentiment_breakdown": { "positive": 28, "neutral": 7, "negative": 5 },
  "top_themes": [
    { "theme": "quality", "count": 18, "sample_quote": "The stitching is incredibly durable" },
    { "theme": "sizing", "count": 12, "sample_quote": "Runs a full size small, order up" }
  ],
  "overall_score": 8,
  "summary": "Customers praise material quality and fast shipping. Sizing runs small, driving most negative reviews."
}
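JSON mode guarantees syntactically valid JSON, but not that the model filled in every field of the schema described in the system prompt. A minimal validation sketch (the `validate_report` helper is illustrative, not part of the example above) that you could run on the parsed output before using it:

```python
REQUIRED_KEYS = {"sentiment_breakdown", "top_themes", "overall_score", "summary"}

def validate_report(report: dict) -> dict:
    """Raise ValueError if the model's JSON is missing fields or out of range."""
    missing = REQUIRED_KEYS - report.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not 1 <= report["overall_score"] <= 10:
        raise ValueError(f"overall_score out of range: {report['overall_score']}")
    for theme in report["top_themes"]:
        if not {"theme", "count", "sample_quote"} <= theme.keys():
            raise ValueError(f"malformed theme entry: {theme}")
    return report
```

On a validation failure you can retry the `analyze` call rather than crash the batch run.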

Error Handling

If your store doesn't use the `reviews` namespace, the Metafields API returns an empty array rather than an error, and the code falls back to Judge.me automatically. For Loox or Yotpo, replace the fallback URL with the corresponding API endpoint.
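One way to make the fallback provider configurable is a small URL-template table. This is a sketch only: the Loox and Yotpo entries are placeholders, and you must substitute the real endpoints from each provider's API documentation.

```python
# Placeholder URL templates: only the Judge.me entry mirrors the example above;
# replace the angle-bracket placeholders with real endpoints from your provider's docs.
FALLBACK_PROVIDERS = {
    "judgeme": "https://judge.me/api/v1/reviews?shop_domain={store}.myshopify.com&external_id={product_id}",
    "loox": "https://<loox-endpoint-from-their-docs>?product={product_id}",
    "yotpo": "https://<yotpo-endpoint-from-their-docs>?product={product_id}",
}

def fallback_url(provider: str, store: str, product_id: int) -> str:
    """Build the review-fallback request URL for the configured provider."""
    return FALLBACK_PROVIDERS[provider].format(store=store, product_id=product_id)
```

`fetch_reviews` would then call `fallback_url(os.environ.get("REVIEW_PROVIDER", "judgeme"), STORE, product_id)` instead of hard-coding the Judge.me URL.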
When batching many reviews, the prompt may exceed the context window. The code caps each request at 40 reviews. For very long reviews, truncate each body to around 200 characters. If the prompt is still too large, split the reviews into batches of 20 and merge the per-batch outputs.
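The truncate-then-batch step could be sketched as follows. The helper names (`truncate`, `batched`, `merge_counts`) are illustrative, and merging the qualitative fields (themes, summary) across batches would need extra logic, such as summing counts per theme; only the sentiment counts merge trivially.

```python
def truncate(reviews, max_chars=200):
    """Trim each review body so very long reviews don't inflate the prompt."""
    return [{**r, "body": r.get("body", "")[:max_chars]} for r in reviews]

def batched(reviews, size=20):
    """Yield successive fixed-size batches of reviews."""
    for i in range(0, len(reviews), size):
        yield reviews[i:i + size]

def merge_counts(reports):
    """Sum per-batch sentiment counts into one overall breakdown."""
    merged = {"positive": 0, "neutral": 0, "negative": 0}
    for rep in reports:
        for k in merged:
            merged[k] += rep["sentiment_breakdown"].get(k, 0)
    return merged
```

Each batch would go through `analyze` separately, after which the per-batch reports are combined with `merge_counts`.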