This example demonstrates how to integrate Mastra agents with Galileo Observability using OpenInference semantic conventions transmitted via the OpenTelemetry Protocol (OTLP).
Mastra uses the Vercel AI SDK (@ai-sdk/openai), not the official OpenAI SDK. This means:
❌ Galileo's `wrapOpenAI()` doesn't work with Mastra

- `wrapOpenAI` wraps the OpenAI SDK (the `openai` package)
- Mastra uses the Vercel AI SDK (`@ai-sdk/openai`)
- These are incompatible: different APIs, different architectures
✅ Solution: Use Mastra's AI Tracing with OpenInference

- Mastra's `ArizeExporter` sends traces using OpenInference conventions
- Galileo accepts OpenInference traces via its OTLP endpoint
- Automatic instrumentation: no manual wrapping needed
This integration shows you how to get Galileo observability for Mastra applications.
Galileo has official documentation for Vercel AI SDK, but that's for direct Vercel AI SDK usage (without Mastra). If you're using Mastra's agent framework, you should follow this integration instead because:
- Mastra already wraps and instruments Vercel AI SDK calls via AI Tracing
- Trying to use both approaches would instrument the same calls twice
- Mastra's `ArizeExporter` is designed to work with Mastra's agent/workflow abstractions
Mastra will automatically capture and send to Galileo:
- ✅ Token metrics (input tokens, output tokens, total tokens)
- ✅ Latency metrics (request duration, TTFT)
- ✅ LLM interactions (prompts, completions, model info)
- ✅ Agent operations (tool calls, decision paths)
- ✅ Workflow execution (step-by-step traces)
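Once configured, no extra code is needed to produce these traces; an ordinary agent call is enough. A sketch (the agent name `simpleAgent` and the `./mastra` import path are placeholders, not taken from this repo's files):

```typescript
// Any call routed through the configured Mastra instance is traced
// automatically -- no manual wrapping or span management required.
import { mastra } from "./mastra"; // your configured Mastra instance

const agent = mastra.getAgent("simpleAgent");
const result = await agent.generate("What is observability?");

// The prompt, completion, token counts, and latency for this call are
// captured by AI Tracing and exported to Galileo in the background.
console.log(result.text);
```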
If you don't have a Galileo account yet, sign up at https://app.galileo.ai/ to get your API key, project, and log stream.
Install dependencies:

```bash
npm install
```

Copy `env.example` to `.env` and fill in your credentials:

```bash
cp env.example .env
```

Required environment variables:

- `OPENAI_API_KEY` - Your OpenAI API key
- `GALILEO_API_KEY` - Your Galileo API key
- `GALILEO_PROJECT` - Your Galileo project name
- `GALILEO_LOG_STREAM` - Your Galileo log stream name
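A filled-in `.env` might look like this (all values below are placeholders; substitute your own credentials and names):

```shell
# .env -- placeholder values for illustration only
OPENAI_API_KEY=sk-...
GALILEO_API_KEY=your-galileo-api-key
GALILEO_PROJECT=my-project
GALILEO_LOG_STREAM=my-log-stream
```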
We provide three examples with increasing complexity:
```bash
npm run quick-start
# Or: npx tsx quick-start.ts
```

Minimal setup (~80 lines). Perfect for getting started.
```bash
npm start
# Or: npx tsx mastra-galileo-integration.ts
```

Complete setup with logging and best practices.
```bash
npm run advanced
# Or: npx tsx advanced-workflow-example.ts
```

Multi-agent coordination, workflows, and tools.
```
┌─────────────┐      ┌─────────────┐      ┌──────────────┐
│   Mastra    │─────▶│   OpenAI    │      │   Galileo    │
│   Agent     │      │     API     │      │  Dashboard   │
└─────────────┘      └─────────────┘      └──────────────┘
       │                                         ▲
       │       OTEL Traces (auto-captured)       │
       └─────────────────────────────────────────┘
```
1. The Mastra agent makes LLM calls using the Vercel AI SDK
2. Mastra's AI Tracing automatically captures:
   - Token usage (input/output)
   - Request/response timing
   - Model and configuration
   - Full conversation context
3. The OTEL exporter sends traces to Galileo's endpoint
4. The Galileo dashboard displays metrics and traces
The integration is configured in the Mastra instance using AI Tracing with the OTEL exporter:

```typescript
import { Mastra } from "@mastra/core";
import { ArizeExporter } from "@mastra/arize";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({ url: "file:./mastra.db" }),
  observability: {
    configs: {
      galileo: {
        serviceName: "mastra-app",
        exporters: [
          new ArizeExporter({
            endpoint: "https://api.galileo.ai/otel/traces",
            headers: {
              "Galileo-API-Key": process.env.GALILEO_API_KEY,
              "project": process.env.GALILEO_PROJECT,
              "logstream": process.env.GALILEO_LOG_STREAM,
            },
          }),
        ],
      },
    },
  },
});
```

The simplest possible setup in ~80 lines:
```typescript
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";
import { ArizeExporter } from "@mastra/arize";

const mastra = new Mastra({
  storage: new LibSQLStore({ url: "file:./mastra.db" }),
  observability: { /* Galileo config */ },
  agents: { simpleAgent },
});
```

What you'll see in Galileo:
- 4 spans (agent, step, gpt-4o, chunk)
- Token metrics on the `gpt-4o` span
- Full trace hierarchy
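The `simpleAgent` registered above could be defined roughly like this. This is a sketch, not a copy of `quick-start.ts`; the agent name, instructions, and model choice are illustrative assumptions:

```typescript
// Hypothetical definition of the simpleAgent referenced in the config above.
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

export const simpleAgent = new Agent({
  name: "simple-agent",
  instructions: "You are a concise, helpful assistant.",
  model: openai("gpt-4o"), // the model span you'll see in Galileo
});
```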
Run it:

```bash
npm run quick-start
```

Production-ready configuration with:
- Structured logging (Pino)
- Environment variable management
- Detailed comments explaining each section
Run it:

```bash
npm start
```

Complex orchestration showing:
- Multi-agent coordination: Research agent + Summary agent
- Custom tools: Weather API example
- Workflows: Multi-step execution with dependencies
- Complete tracing: Every agent, tool, and workflow step
What you'll see in Galileo:
```
Workflow Execution
├── Research Agent (gpt-4o) → 150 tokens
│   └── Tool: get-weather
├── Summary Agent (gpt-4o-mini) → 80 tokens
└── Total: 230 tokens
```
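A custom tool like `get-weather` is defined with Mastra's `createTool`. The sketch below is illustrative: the schema fields and stubbed response are assumptions, not the actual implementation in `advanced-workflow-example.ts`:

```typescript
// Hypothetical get-weather tool; swap the stub for a real weather API call.
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const getWeatherTool = createTool({
  id: "get-weather",
  description: "Fetch current weather for a city",
  inputSchema: z.object({ city: z.string() }),
  execute: async ({ context }) => {
    // Stubbed response for illustration; each invocation appears as a
    // tool span under the calling agent in Galileo.
    return { city: context.city, tempC: 21, conditions: "clear" };
  },
});
```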
Run it:

```bash
npm run advanced
```

Important: Galileo requires OpenInference semantic conventions. Mastra's `OtelExporter` (which uses OpenTelemetry GenAI conventions) will not work - Galileo rejects those spans during validation.
Using Mastra's built-in AI Tracing provides:
- Automatic Instrumentation: No need to manually wrap functions or SDK clients
- Comprehensive Coverage: Captures agents, workflows, tools, and every LLM call
- Zero Configuration: Just add the exporter - Mastra handles the rest
- Rich Context: Distributed tracing across your entire agent execution
- OpenInference Compatible: Works with Galileo, Arize Phoenix, and other OpenInference platforms
If you're familiar with Galileo's `wrapOpenAI()` function, here's why this approach is different:

| Aspect | `wrapOpenAI()` | Mastra AI Tracing |
|---|---|---|
| SDK Compatibility | OpenAI SDK only | Vercel AI SDK (any provider) |
| Instrumentation | Manual wrapper per client | Automatic for all agents |
| Agent Support | No | Yes - full agent orchestration |
| Workflow Support | No | Yes - multi-step workflows |
| Tool Usage | No | Yes - automatic tool tracing |
| Setup | Wrap each client | Configure once globally |
Bottom line: For Mastra-based applications, AI Tracing with `ArizeExporter` is the only way to integrate with Galileo.
All agents registered with Mastra are automatically traced:
```typescript
const mastra = new Mastra({
  agents: {
    researchAgent,
    summaryAgent,
    reviewAgent,
  },
  observability: { /* ... */ },
});
```

Mastra workflows are also automatically traced:
```typescript
const workflow = createWorkflow({
  id: "data-processing",
  // ...
}).then(step1).then(step2);

await workflow.execute({ input: "data" });
// All steps are traced and sent to Galileo
```

For additional custom tracking:
```typescript
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("my-custom-tracer");
const span = tracer.startSpan("custom-operation");
// ... do work ...
span.end();
```

Most common issue: Program exits before traces are sent
The OTEL exporter buffers spans and waits ~5 seconds after the root span completes before exporting. If your program exits immediately, traces won't be sent.
Solution: Add a delay before your program exits:
```typescript
// After your agent calls
console.log("Flushing traces to Galileo...");
await new Promise(resolve => setTimeout(resolve, 6000));
console.log("Traces sent!");
```

All the examples in this folder include this delay.
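One way to make sure the delay always runs, even when the agent call throws, is a small wrapper. This is a sketch (the `runWithFlush` helper is not part of the examples); the 6-second default mirrors the exporter's ~5-second buffer window described above:

```typescript
// Run a task, then always wait out the exporter's buffer window before the
// process exits -- even if the task throws.
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

export async function runWithFlush(
  task: () => Promise<void>,
  flushMs = 6000,
): Promise<void> {
  try {
    await task();
  } finally {
    console.log("Flushing traces to Galileo...");
    await sleep(flushMs); // give the OTEL exporter time to send buffered spans
    console.log("Traces sent!");
  }
}
```

Errors from the task still propagate to the caller; the `finally` block only guarantees the flush delay happens first.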
Other checks:
- Verify your API key and project/stream names are correct
- Check the OTEL endpoint is reachable
- Look for errors in console output
- Ensure `observability` is properly configured with the OTEL exporter
- Token metrics are automatically captured by Mastra's telemetry
- Ensure `observability.default.enabled` is `true`
- Check that storage is properly configured
To sample traces instead of capturing all:
```typescript
telemetry: {
  sampling: {
    type: "ratio",
    probability: 0.1, // Sample 10% of traces
  },
}
```

- OpenInference Semantic Conventions - Official spec for all OpenInference attributes (required reading!)
- Galileo's Vercel AI SDK Integration - For direct Vercel AI SDK usage without Mastra (different approach)
- Mastra Observability - Mastra observability docs
- Mastra AI Tracing - AI tracing guide
- Galileo Documentation - Galileo platform docs
- OpenTelemetry Specification - OTLP protocol docs