OpenAI’s model suite — GPT-5, GPT-4.1, o3/o4-mini for reasoning, Whisper for speech-to-text, TTS for text-to-speech, DALL-E and Sora for visual generation, and embedding models for vector search — pairs with Mavera to create multi-modal content pipelines. These five jobs chain OpenAI endpoints with Mavera surfaces to transcribe meetings into structured action plans, pre-process video with vision models, convert generated copy into audio libraries, find and fill content gaps via embeddings, and run video generation-analysis feedback loops.

API Reference Card

| Detail | Value |
| --- | --- |
| Base URL | https://api.openai.com/v1 |
| Auth | Bearer Token — Authorization: Bearer sk-... |
| Key models | GPT-5, GPT-4.1, o3/o4-mini, Whisper, TTS, DALL-E 3, text-embedding-3-large, Sora |
| Rate limits | Vary by model tier — TPM and RPM limits per organization (see OpenAI rate limits) |
| Mavera base | https://app.mavera.io/api/v1 |
| Mavera auth | Authorization: Bearer mvra_live_... |

All examples use two environment variables: OPENAI_API_KEY (your OpenAI platform key starting with sk-) and MAVERA_API_KEY (your Mavera key starting with mvra_live_). Never commit either key to version control. Use a .env file or your platform’s secret manager.

Prerequisites

1. OpenAI API key

   Sign up at platform.openai.com. Navigate to API keys and create a new secret key. Ensure your organization has billing enabled and sufficient credits.

2. Mavera API key

   Get your key from the Mavera dashboard.

3. Install SDKs

   # Python
   pip install openai requests

   # JavaScript
   npm install openai

4. Set environment variables

   export OPENAI_API_KEY="sk-your-openai-key"
   export MAVERA_API_KEY="mvra_live_xxxxx"
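
With the prerequisites in place, the Python examples for the jobs below can share one minimal setup. This is a sketch, not a full client: it assumes the two environment variables from step 4 are set, uses the official openai SDK for OpenAI calls, and talks to Mavera with plain requests calls against the base URL and Bearer auth from the reference card above.

```python
import os

import requests
from openai import OpenAI

# Read both keys from the environment (never hard-code them).
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]   # sk-...
MAVERA_API_KEY = os.environ["MAVERA_API_KEY"]   # mvra_live_...

# The OpenAI SDK would also pick up OPENAI_API_KEY on its own;
# passing it explicitly keeps the dependency visible.
client = OpenAI(api_key=OPENAI_API_KEY)

# Mavera is called over HTTP with a Bearer token.
MAVERA_BASE = "https://app.mavera.io/api/v1"
MAVERA_HEADERS = {"Authorization": f"Bearer {MAVERA_API_KEY}"}
```

Later snippets reuse client, MAVERA_BASE, and MAVERA_HEADERS from this block.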

Jobs

| # | Job | OpenAI Endpoint | Mavera Surface | Output |
| --- | --- | --- | --- | --- |
| 1 | Whisper Transcription → Meetings Pipeline | POST /audio/transcriptions | Mave Agent (Chat) | Structured meeting analysis |
| 2 | GPT Vision → Video Analysis Pre-Processing | Responses (vision) | Mave Agent (Chat) | Marketing analysis of visual content |
| 3 | TTS → Audio Content Library | POST /audio/speech | Generate | Audio files from generated content |
| 4 | Embeddings → Knowledge Base Gap Filling | POST /embeddings | Generate | Gap-filling content |
| 5 | Sora Video Generation → Analysis Loop | POST /images/generations (Sora) | Mave Agent (Chat) | Iterated video with quality scores |
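
As an illustration of how a job chains the two APIs, here is a rough sketch of Job 1: transcribe a recording with Whisper, then hand the transcript to a Mave Agent chat for structured analysis. The Whisper call uses the documented POST /audio/transcriptions endpoint via the openai SDK; the Mavera path (/agents/{agent_id}/chat), the agent_id parameter, and the message payload are illustrative assumptions rather than confirmed API details, so check the Mave Agent reference for the exact surface. The sketch reuses client, MAVERA_BASE, and MAVERA_HEADERS from the setup block above.

```python
import requests

def transcribe_and_analyze(audio_path: str, agent_id: str) -> dict:
    """Job 1 sketch: Whisper transcript -> Mave Agent meeting analysis."""
    # 1. Transcribe the meeting recording (POST /audio/transcriptions).
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )

    # 2. Send the transcript to a Mave Agent chat for a structured action plan.
    #    NOTE: the endpoint path and payload shape are assumptions for illustration.
    response = requests.post(
        f"{MAVERA_BASE}/agents/{agent_id}/chat",
        headers=MAVERA_HEADERS,
        json={
            "message": "Turn this meeting transcript into a structured "
                       "action plan with owners and deadlines:\n\n" + transcript.text,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()
```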

Rate Limits & Production Notes

| OpenAI Endpoint | Rate Limit | Strategy |
| --- | --- | --- |
| /audio/transcriptions (Whisper) | 50 RPM | Queue audio files; 2s delay between calls |
| Responses (GPT-4.1 vision) | 500–10,000 RPM (tier-dependent) | Batch frames; use detail: "low" to reduce tokens |
| /audio/speech (TTS) | 50 RPM | Sequential with 1s delay |
| /embeddings | 3,000 RPM / 1M TPM | Batch up to 2,048 inputs per request |
| /images/generations (Sora) | Tier-dependent | Single request with retry; expect 30–120s |
Use OpenAI’s usage dashboard to monitor token and credit consumption in real time. Set monthly spend limits under Organization → Billing → Usage limits to prevent unexpected charges.
Rate limits vary by organization tier. New accounts start with lower limits that increase with usage history. For 429 errors, use exponential backoff: wait 1s, 2s, 4s, 8s between retries, capped at 60s. Whisper files over 25 MB must be split. TTS inputs over 4,096 characters must be chunked at sentence boundaries. Monitor your Mavera credit balance in the Mavera dashboard.
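
The backoff schedule above (1s, 2s, 4s, 8s, capped at 60s) is easy to wrap in a small retry helper. A minimal sketch, assuming the openai SDK's RateLimitError is the signal for a 429 and reusing client from the setup block:

```python
import time

from openai import RateLimitError

def with_backoff(call, max_retries: int = 6, cap: float = 60.0):
    """Retry `call` on 429s: wait 1s, 2s, 4s, 8s, ... capped at `cap` seconds."""
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay = min(delay * 2, cap)

# Example: a batched embeddings request (up to 2,048 inputs per call).
vectors = with_backoff(
    lambda: client.embeddings.create(
        model="text-embedding-3-large",
        input=["first knowledge-base chunk", "second knowledge-base chunk"],
    )
)
```

For the TTS character limit, a hypothetical helper like the one below splits long input at sentence boundaries so each piece stays under 4,096 characters (a single sentence longer than the limit would still need manual handling):

```python
import re

def chunk_for_tts(text: str, limit: int = 4096) -> list[str]:
    """Split text into chunks of at most `limit` characters at sentence boundaries."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > limit:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```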

All Integrations

- OpenAI API Docs
- Mave Agent
- Generate
- Brand Voice
- Focus Groups