
Function calling lets the model use your code. You describe functions (tools), the model decides when to call them and with what arguments, and you run them locally. Then you send the result back, and the model weaves it into a natural-language answer. This is how you connect Mavera to databases, APIs, file systems — anything your code can reach.
Mavera uses the OpenAI-compatible tools interface. If you’ve used function calling with OpenAI, you already know the pattern. Same SDKs, same flow.

How It Works

Every function-calling interaction follows a five-step cycle:
  1. You send a request that includes your function definitions (tools).
  2. The model decides whether a function is needed and returns its name and arguments.
  3. Your code executes the function locally.
  4. You send the function result back to the model.
  5. The model incorporates the result into a natural-language answer.
The model never executes code itself — it tells you what to call and with what arguments, and you do the rest. The cycle can repeat — the model may call additional functions based on earlier results.

Quick Example

Let’s build a complete get_weather flow. You define the function, the model calls it, and you return the result.

Step 1: Install the OpenAI SDK

pip install openai

Step 2: Define your tool and make the first call

import json
from openai import OpenAI

client = OpenAI(
    api_key="mvra_live_your_key_here",
    base_url="https://app.mavera.io/api/v1",
)

tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather for a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City and state, e.g. San Francisco, CA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

input_messages = [{"role": "user", "content": "What's the weather like in Paris right now?"}]

response = client.responses.create(model="mavera-1", input=input_messages, tools=tools)

Step 3: Check for function calls and execute locally

When the model decides to call a function, it doesn’t return text — the output contains items with type: "function_call". Parse the arguments and run your function.
function_calls = [item for item in response.output if item.type == "function_call"]

if function_calls:
    tool_call = function_calls[0]
    args = json.loads(tool_call.arguments)

    def get_weather(location, unit="celsius"):
        return {"temperature": 22, "unit": unit, "condition": "Partly cloudy"}

    result = get_weather(**args)
    print(f"Function called: {tool_call.name}({args})")
    print(f"Result: {result}")

Step 4: Send the result back and get the final response

Build a new input that includes the original messages, the function call from the response, and your function result. Then call the API again.
final_response = client.responses.create(
    model="mavera-1",
    input=[
        *input_messages,
        {"type": "function_call", "name": tool_call.name, "call_id": tool_call.call_id, "arguments": tool_call.arguments},
        {"type": "function_call_output", "call_id": tool_call.call_id, "output": json.dumps(result)},
    ],
    tools=tools,
)

print(final_response.output[0].content[0].text)
# "It's currently 22°C and partly cloudy in Paris."
That’s the full cycle. The model saw your function definition, decided to call it, you executed it, sent the output back, and the model produced a natural answer.
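Because the model may request further calls after seeing earlier results, production code usually wraps steps 2–4 in a loop that runs until the model answers with text. A minimal sketch of that loop — here `run_model` is a local stand-in for `client.responses.create`, stubbed so the control flow is visible without network access; it is not part of any SDK:

```python
import json

def get_weather(location, unit="celsius"):
    # Stand-in for a real weather lookup.
    return {"temperature": 22, "unit": unit, "condition": "Partly cloudy"}

available_functions = {"get_weather": get_weather}

def run_model(messages):
    # Stand-in for client.responses.create(...).output: requests a
    # function call first, then answers in text once a result is present.
    if any(m.get("type") == "function_call_output" for m in messages):
        return [{"type": "text", "text": "It's 22°C and partly cloudy."}]
    return [{"type": "function_call", "name": "get_weather",
             "call_id": "call_1", "arguments": json.dumps({"location": "Paris"})}]

def agent_loop(messages, max_turns=5):
    for _ in range(max_turns):
        output = run_model(messages)
        calls = [item for item in output if item["type"] == "function_call"]
        if not calls:
            return output[0]["text"]  # model answered with text: done
        for call in calls:
            fn = available_functions[call["name"]]
            result = fn(**json.loads(call["arguments"]))
            # Echo the call and attach its result for the next turn.
            messages.append(call)
            messages.append({"type": "function_call_output",
                             "call_id": call["call_id"],
                             "output": json.dumps(result)})
    raise RuntimeError("model kept calling functions; giving up")

answer = agent_loop([{"role": "user", "content": "Weather in Paris?"}])
```

The `max_turns` guard matters in real deployments: without it, a misbehaving prompt can keep the loop (and your API bill) running indefinitely.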

Defining Functions

Each tool definition lives inside a tools array entry. The function’s fields sit at the top level of the tool object, with the arguments described in JSON Schema under parameters.

Function Definition Fields

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| type | string | Yes | Always "function" |
| name | string | Yes | The function name the model will reference. Use snake_case. |
| description | string | Yes | Plain-English explanation of what the function does. The model reads this to decide when to call it. |
| parameters | object | Yes | A JSON Schema object describing the function’s arguments. |
| strict | boolean | No | When true, the model’s arguments are guaranteed to match your schema exactly. See Strict Mode. |

Writing Good Descriptions

The description field is the most important part. The model reads it to decide when to call the function. Be specific. Say what the function returns.

Complex Schema Example

Here’s a search_products function with filters, enums, and nested objects:
{
  "type": "function",
  "name": "search_products",
  "description": "Search the product catalog. Returns up to 10 matching products with name, price, rating, and stock status.",
  "strict": true,
  "parameters": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Free-text search query, e.g. 'wireless headphones'"
      },
      "category": {
        "type": "string",
        "enum": ["electronics", "clothing", "home", "sports", "books"],
        "description": "Product category to filter by"
      },
      "price_range": {
        "type": "object",
        "description": "Min and max price filter in USD",
        "properties": {
          "min": { "type": "number", "description": "Minimum price" },
          "max": { "type": "number", "description": "Maximum price" }
        },
        "required": ["min", "max"],
        "additionalProperties": false
      },
      "sort_by": {
        "type": "string",
        "enum": ["relevance", "price_asc", "price_desc", "rating"],
        "description": "Sort order for results"
      }
    },
    "required": ["query", "category", "price_range", "sort_by"],
    "additionalProperties": false
  }
}
Use enum wherever the set of valid values is known. This constrains the model’s output and prevents hallucinated argument values.
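Even with enums in the schema, it is worth validating arguments in your own code before executing anything with side effects, since without strict mode the model can still deviate. A defensive sketch — a hand-rolled check covering the two most common failure modes, not a full JSON Schema validator, with a schema trimmed down from the search_products definition above:

```python
import json

def check_args(schema, args):
    """Minimal check: required keys present, enum values allowed.

    Deliberately not a full JSON Schema validator -- just the two
    failure modes worth catching before executing a tool call.
    """
    errors = []
    props = schema.get("properties", {})
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required field: {key}")
    for key, value in args.items():
        spec = props.get(key)
        if spec is None:
            errors.append(f"unexpected field: {key}")
        elif "enum" in spec and value not in spec["enum"]:
            errors.append(f"invalid value for {key}: {value!r}")
    return errors

schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "category": {"type": "string",
                     "enum": ["electronics", "clothing", "home", "sports", "books"]},
    },
    "required": ["query", "category"],
}

# A hallucinated category is caught before your search code ever runs.
errors = check_args(schema, json.loads('{"query": "headphones", "category": "toys"}'))
```

If validation fails, return the error list as the function output instead of raising: the model can then correct its arguments on the next turn.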

Handling Tool Calls

The model can return zero, one, or multiple function calls in a single response. Your code needs to handle all three cases. When the model calls functions, the output array contains items with type: "function_call". Each has a name, arguments (a JSON string), and a call_id (reference it when returning results).

Dispatch Pattern for Multiple Tools

When you offer several functions, you need a dispatcher. Map function names to callables, loop through every function call, and send all results back in one request.
import json
from openai import OpenAI

client = OpenAI(
    api_key="mvra_live_your_key_here",
    base_url="https://app.mavera.io/api/v1",
)

def get_weather(location, unit="celsius"):
    return {"temperature": 22, "unit": unit, "condition": "Sunny"}

def get_stock_price(symbol):
    return {"symbol": symbol, "price": 182.52, "currency": "USD"}

available_functions = {
    "get_weather": get_weather,
    "get_stock_price": get_stock_price,
}

# Schemas for both functions, so the model knows how and when to call them
tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Get the current weather for a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name, e.g. Tokyo"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
    {
        "type": "function",
        "name": "get_stock_price",
        "description": "Get the latest trading price for a stock ticker symbol",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {"type": "string", "description": "Ticker symbol, e.g. AAPL"},
            },
            "required": ["symbol"],
        },
    },
]

input_messages = [{"role": "user", "content": "Weather in Tokyo and current AAPL price?"}]

response = client.responses.create(
    model="mavera-1", input=input_messages, tools=tools,
)

function_calls = [item for item in response.output if item.type == "function_call"]

if function_calls:
    follow_up = list(input_messages)
    for tc in function_calls:
        fn = available_functions.get(tc.name)
        args = json.loads(tc.arguments)
        result = fn(**args) if fn else {"error": f"Unknown function: {tc.name}"}

        follow_up.append({"type": "function_call", "name": tc.name, "call_id": tc.call_id, "arguments": tc.arguments})
        follow_up.append({"type": "function_call_output", "call_id": tc.call_id, "output": json.dumps(result)})

    final = client.responses.create(
        model="mavera-1", input=follow_up, tools=tools,
    )
    print(final.output[0].content[0].text)
When the model calls multiple functions, send all results back in a single request. Each function_call_output must reference the correct call_id.

When No Tool is Called

Sometimes the model answers directly — status is "completed" and the output contains text content. Always check before dispatching:
function_calls = [item for item in response.output if item.type == "function_call"]
if function_calls:
    ...  # dispatch pattern above
else:
    print(response.output[0].content[0].text)

Strict Mode

Set strict: true to guarantee the model’s arguments match your JSON Schema exactly. No extra fields, no missing fields, correct types every time.
{
  "type": "function",
  "name": "create_order",
  "description": "Place a new order for a customer",
  "strict": true,
  "parameters": {
    "type": "object",
    "properties": {
      "product_id": { "type": "string" },
      "quantity": { "type": "integer" },
      "shipping_method": {
        "type": "string",
        "enum": ["standard", "express", "overnight"]
      }
    },
    "required": ["product_id", "quantity", "shipping_method"],
    "additionalProperties": false
  }
}

Requirements

Strict mode has two hard requirements:
  1. All properties must be in required. If a field is optional, use a union type: {"type": ["string", "null"]}.
  2. additionalProperties must be false at every level — root and nested objects.
Without strict mode, the model usually follows your schema — but “usually” isn’t good enough for production. Strict mode eliminates argument parsing surprises entirely.
Always use strict mode for production workloads. The small upfront cost of listing every field as required pays for itself in reliability.
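Because forgetting either requirement on a nested object invalidates the whole definition, a small lint helper run in your test suite can catch mistakes before a schema ships. A sketch — check_strict is not part of any SDK; it just encodes the two requirements above:

```python
def check_strict(schema, path="parameters"):
    """Recursively verify the two strict-mode invariants on an object schema."""
    problems = []
    if schema.get("type") == "object":
        props = schema.get("properties", {})
        # Requirement 1: every property must appear in 'required'.
        if set(schema.get("required", [])) != set(props):
            problems.append(f"{path}: every property must be listed in 'required'")
        # Requirement 2: additionalProperties must be false at every level.
        if schema.get("additionalProperties") is not False:
            problems.append(f"{path}: 'additionalProperties' must be false")
        for name, sub in props.items():
            problems.extend(check_strict(sub, f"{path}.{name}"))
    return problems

# The create_order schema above passes; dropping a field from 'required' would not.
schema = {
    "type": "object",
    "properties": {
        "product_id": {"type": "string"},
        "quantity": {"type": "integer"},
        "shipping_method": {"type": "string",
                            "enum": ["standard", "express", "overnight"]},
    },
    "required": ["product_id", "quantity", "shipping_method"],
    "additionalProperties": False,
}
problems = check_strict(schema)
```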

Tool Choice

The tool_choice parameter controls whether and how the model uses your functions.

Options

| Value | Behavior |
| --- | --- |
| "auto" | Default. The model decides whether to call a function or respond with text. |
| "required" | The model must call at least one function. It won’t respond with text only. |
| "none" | The model cannot call any function, even if tools are defined. Useful for follow-up turns where you want a text-only answer. |
| {"type": "function", "name": "..."} | Force a specific function. The model must call exactly this function. |

Code Examples

tool_choice = "auto"
tool_choice = "required"
tool_choice = "none"
tool_choice = {"type": "function", "name": "get_weather"}

response = client.responses.create(
    model="mavera-1", input=input_messages, tools=tools, tool_choice=tool_choice,
)
Use "required" when you know the user’s intent maps to a function — for example, in a structured-input UI where every submission should trigger an action.

Using with Personas

Function calling and personas are a powerful combination. The persona shapes how the model uses your functions and how it frames the results. The same function call, filtered through different personas, produces different responses.

Example: Customer Support Bot

Imagine a support bot for an e-commerce store. You define two functions — lookup_order and process_refund — and attach a frustrated-customer persona. The persona influences which functions the model calls, when it calls them, and how it communicates the results.
import json
from openai import OpenAI

client = OpenAI(
    api_key="mvra_live_your_key_here",
    base_url="https://app.mavera.io/api/v1",
)

support_tools = [
    {
        "type": "function",
        "name": "lookup_order",
        "description": "Look up an order by ID. Returns status, items, and shipping info.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string", "description": "e.g. ORD-12345"},
            },
            "required": ["order_id"],
        },
    },
    {
        "type": "function",
        "name": "process_refund",
        "description": "Process a full or partial refund. Returns confirmation and timeline.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string"},
                "reason": {"type": "string", "enum": ["damaged", "wrong_item", "late_delivery", "changed_mind"]},
            },
            "required": ["order_id", "reason"],
        },
    },
]

input_messages = [
    {"role": "user", "content": "Order ORD-98765 was supposed to arrive 5 days ago. I'm really frustrated."},
]

response = client.responses.create(
    model="mavera-1",
    input=input_messages,
    instructions="You are a customer support agent. Be empathetic and resolve issues quickly.",
    tools=support_tools,
    extra_body={"persona_id": "frustrated_customer_support"},
)

dispatch = {
    "lookup_order": lambda order_id, **kw: {
        "order_id": order_id, "status": "in_transit",
        "expected_delivery": "2026-03-13", "carrier": "FedEx", "tracking": "FX123456789",
    },
    "process_refund": lambda order_id, reason, **kw: {
        "refund_id": "REF-44321", "amount": 89.99,
        "status": "processed", "estimated_credit": "3-5 business days",
    },
}

function_calls = [item for item in response.output if item.type == "function_call"]

if function_calls:
    follow_up = list(input_messages)
    for tc in function_calls:
        result = dispatch[tc.name](**json.loads(tc.arguments))
        follow_up.append({"type": "function_call", "name": tc.name, "call_id": tc.call_id, "arguments": tc.arguments})
        follow_up.append({"type": "function_call_output", "call_id": tc.call_id, "output": json.dumps(result)})

    final = client.responses.create(
        model="mavera-1", input=follow_up,
        instructions="You are a customer support agent. Be empathetic and resolve issues quickly.",
        tools=support_tools,
        extra_body={"persona_id": "frustrated_customer_support"},
    )
    print(final.output[0].content[0].text)
With the frustrated_customer_support persona, the model will proactively look up the order, acknowledge the delay with empathy, and likely offer a resolution — possibly calling process_refund before the customer even asks. A different persona (say, a corporate-formal one) would use the same functions but frame the response differently.

Best Practices

The model decides which function to call based on the description field. Vague descriptions lead to wrong calls. Include what the function does, what it returns, and any constraints.
// Vague — the model has to guess
{ "description": "Get data" }

// Specific — the model knows exactly when to use it
{ "description": "Retrieve a customer's order history by email address. Returns the 10 most recent orders with status, total, and date." }
Every function definition consumes tokens in the prompt. More functions means higher latency and cost — and the model has more room to pick the wrong one. If you have many functions, group them by use case and only send the relevant subset per request.
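One way to keep the per-request tool count down is to group definitions by use case and route each request to a subset. A sketch with deliberately naive keyword routing and stub definitions (a production system might pick the subset with a classifier or a first model call; none of these names come from the Mavera SDK):

```python
# Tool subsets grouped by use case. Definitions are minimal stubs here;
# real entries would carry full descriptions and JSON Schema parameters.
TOOLSETS = {
    "weather": [
        {"type": "function", "name": "get_weather",
         "description": "Get current weather for a location",
         "parameters": {"type": "object", "properties": {}}},
    ],
    "finance": [
        {"type": "function", "name": "get_stock_price",
         "description": "Get the latest price for a ticker symbol",
         "parameters": {"type": "object", "properties": {}}},
    ],
}

def select_tools(message: str):
    # Naive keyword routing, purely to illustrate the subsetting pattern.
    if "price" in message.lower() or "stock" in message.lower():
        return TOOLSETS["finance"]
    return TOOLSETS["weather"]

tools = select_tools("What's AAPL's price?")
```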
Without strict: true, the model might omit optional fields, add unexpected fields, or use wrong types. Strict mode guarantees the output matches your schema. The tradeoff — listing every field as required — is minimal.
If your code already knows the user_id or session_id, don’t include those as function parameters for the model to fill. Pass them directly in your dispatch logic. This reduces errors and keeps schemas simpler.
def dispatch(tool_call, current_user_id):
    args = json.loads(tool_call.arguments)
    args["user_id"] = current_user_id
    return available_functions[tool_call.name](**args)
If two functions are always called together (e.g., get_customer then get_orders), consider merging them into a single get_customer_with_orders function. Fewer round-trips means lower latency and cost.
The function result goes back into the prompt as tokens. Return only the fields the model needs to formulate a response. Don’t dump an entire database row if the model only needs three fields.
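One way to enforce this is to whitelist fields at the dispatch boundary rather than trusting each function to return a minimal payload. A sketch (the function and field names are illustrative, not part of any SDK):

```python
# Allowed result fields per function; anything else is dropped before
# the result is serialized back into the prompt.
RESULT_FIELDS = {
    "lookup_order": ("order_id", "status", "expected_delivery"),
}

def trim_result(name, result):
    keep = RESULT_FIELDS.get(name)
    if keep is None:
        return result  # no trimming policy defined: pass through unchanged
    return {k: v for k, v in result.items() if k in keep}

# A full database row shrinks to the three fields the model needs.
row = {"order_id": "ORD-98765", "status": "in_transit",
       "expected_delivery": "2026-03-13", "internal_margin": 0.31,
       "warehouse_id": "WH-7"}
trimmed = trim_result("lookup_order", row)
```

A side benefit: internal fields like margins or warehouse IDs never enter the model’s context, so they can’t leak into a response.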
If your function fails, return a structured error message — not a stack trace. The model can use a clean error to generate a helpful response to the user.
def lookup_order(order_id):
    order = db.find(order_id)
    if not order:
        return {"error": "Order not found", "order_id": order_id}
    return {"order_id": order.id, "status": order.status}

See Also

Responses API

Full reference for the responses API — streaming, analysis mode, structured outputs, and more

Personas

50+ pre-built personas to combine with function calling for audience-aware tool use

Authentication

Set up your API key and configure the SDK

API Reference

Complete Responses API endpoint specification