Package Exports
- @mastra/core
- @mastra/core/a2a
- @mastra/core/action
- @mastra/core/agent
- @mastra/core/agent/input-processor
- @mastra/core/agent/input-processor/processors
- @mastra/core/agent/message-list
- @mastra/core/agent/save-queue
- @mastra/core/agent/workflows/prepare-stream
- @mastra/core/ai-tracing
- @mastra/core/ai-tracing/exporters
- @mastra/core/ai-tracing/span_processors
- @mastra/core/ai-tracing/spans
- @mastra/core/ai-tracing/tracers
- @mastra/core/base
- @mastra/core/bundler
- @mastra/core/cache
- @mastra/core/deployer
- @mastra/core/di
- @mastra/core/error
- @mastra/core/eval
- @mastra/core/events
- @mastra/core/hooks
- @mastra/core/integration
- @mastra/core/llm
- @mastra/core/llm/model
- @mastra/core/llm/model/gateways
- @mastra/core/logger
- @mastra/core/loop
- @mastra/core/loop/network
- @mastra/core/loop/telemetry
- @mastra/core/loop/workflows/agentic-execution
- @mastra/core/loop/workflows/agentic-loop
- @mastra/core/mastra
- @mastra/core/mcp
- @mastra/core/memory
- @mastra/core/network/vNext
- @mastra/core/package.json
- @mastra/core/processors
- @mastra/core/processors/processors
- @mastra/core/relevance
- @mastra/core/relevance/cohere
- @mastra/core/relevance/mastra-agent
- @mastra/core/runtime-context
- @mastra/core/scores
- @mastra/core/scores/run-experiment
- @mastra/core/scores/scoreTraces
- @mastra/core/server
- @mastra/core/storage
- @mastra/core/storage/domains
- @mastra/core/storage/domains/legacy-evals
- @mastra/core/storage/domains/memory
- @mastra/core/storage/domains/observability
- @mastra/core/storage/domains/operations
- @mastra/core/storage/domains/scores
- @mastra/core/storage/domains/traces
- @mastra/core/storage/domains/workflows
- @mastra/core/stream
- @mastra/core/stream/aisdk/v5/compat
- @mastra/core/stream/base
- @mastra/core/telemetry
- @mastra/core/telemetry/otel-vendor
- @mastra/core/test-utils/llm-mock
- @mastra/core/tools
- @mastra/core/tools/is-vercel-tool
- @mastra/core/tts
- @mastra/core/types
- @mastra/core/utils
- @mastra/core/utils/zod-to-json
- @mastra/core/vector
- @mastra/core/vector/filter
- @mastra/core/voice
- @mastra/core/workflows
- @mastra/core/workflows/_constants
- @mastra/core/workflows/evented
- @mastra/core/workflows/evented/workflow-event-processor
- @mastra/core/workflows/legacy
Readme
@mastra/core
Mastra is a framework for building AI-powered applications and agents with a modern TypeScript stack.
It includes everything you need to go from early prototypes to production-ready applications. Mastra integrates with frontend and backend frameworks like React, Next.js, and Node, or you can deploy it anywhere as a standalone server. It's the easiest way to build, tune, and scale reliable AI products.
This is the @mastra/core package, which contains the main functionality of Mastra, including agents, workflows, tools, memory, and tracing.
Installation
@mastra/core is the essential building block of a Mastra application, and you will rarely want to use it as a standalone package. We therefore recommend following the installation guide to get started with Mastra.
You can install the package like so:
npm install @mastra/core
Core Components
- Mastra (/mastra) - Central orchestration class that initializes and coordinates all Mastra components. Provides dependency injection for agents, workflows, tools, memory, storage, and other services through a unified configuration interface. Learn more about Mastra
- Agents (/agent) - Autonomous AI entities that understand instructions, use tools, and complete tasks. Encapsulate LLM interactions with conversation history, tool execution, memory integration, and behavioral guidelines. Learn more about Agents (see the agent sketch after this list)
- Workflows (/workflows) - Graph-based execution engine for chaining, branching, and parallelizing LLM calls. Orchestrates complex AI tasks with state management, error recovery, and conditional logic. Learn more about Workflows (see the workflow sketch after this list)
- Tools (/tools) - Functions that agents can invoke to interact with external systems. Each tool has a schema and description enabling AI to understand and use them effectively. Supports custom tools, toolsets, and runtime context. Learn more about Tools
- Memory (/memory) - Thread-based conversation persistence with semantic recall and working memory capabilities. Stores conversation history, retrieves contextually relevant information, and maintains agent state across interactions. Learn more about Memory
- MCP (/mcp) - Model Context Protocol integration enabling external tool sources. Supports SSE, HTTP, and Hono-based MCP servers with automatic tool conversion and registration. Learn more about MCP
- AI Tracing (/ai-tracing) - Type-safe observability system tracking AI operations through spans. Provides OpenTelemetry-compatible tracing with event-driven exports, flexible sampling, and pluggable processors for real-time monitoring. Learn more about AI Tracing
- Storage (/storage) - Pluggable storage layer with standardized interfaces for multiple backends. Supports PostgreSQL, LibSQL, MongoDB, and other databases for persisting agent data, memory, and workflow state. Learn more about Storage
- Vector (/vector) - Vector operations and embedding management for semantic search. Provides unified interface for vector stores with filtering capabilities and similarity search. Learn more about Vector
- Server (/server) - HTTP server implementation built on Hono with OpenAPI support. Provides custom API routes, middleware, authentication, and runtime context for deploying Mastra as a standalone service. Learn more about Server
- Voice (/voice) - Voice interaction capabilities with text-to-speech and speech-to-text integration. Supports multiple voice providers and real-time voice communication for agents. Learn more about Voice
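To illustrate how the Mastra class, agents, and tools fit together, here is a minimal sketch (not taken from the official docs) that registers one agent with one tool. The tool id, the fake weather lookup, and the model identifier are placeholders; depending on your Mastra version, the model option may instead need to be an AI SDK model instance rather than a string.

```ts
import { Mastra } from "@mastra/core/mastra";
import { Agent } from "@mastra/core/agent";
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

// A tool exposes a schema and description so the LLM can decide when to call it.
const weatherTool = createTool({
  id: "get-weather",
  description: "Return the current weather for a city",
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ summary: z.string() }),
  execute: async ({ context }) => {
    // Placeholder logic; replace with a real weather API call.
    return { summary: `It is sunny in ${context.city}.` };
  },
});

// An agent bundles instructions, a model, and the tools it may invoke.
const weatherAgent = new Agent({
  name: "weather-agent",
  instructions: "Answer weather questions using the get-weather tool.",
  model: "openai/gpt-4o-mini", // placeholder model id
  tools: { weatherTool },
});

// The Mastra class wires agents (and workflows, storage, etc.) together.
export const mastra = new Mastra({
  agents: { weatherAgent },
});
```

Once registered, the agent can be retrieved with mastra.getAgent("weatherAgent") and invoked with agent.generate(...).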
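A hypothetical two-step workflow might look like the sketch below, assuming the current createWorkflow/createStep API. The step ids, schemas, and word-count logic are invented for illustration.

```ts
import { createWorkflow, createStep } from "@mastra/core/workflows";
import { z } from "zod";

// Each step declares input and output schemas and an execute function.
const fetchText = createStep({
  id: "fetch-text",
  inputSchema: z.object({ url: z.string() }),
  outputSchema: z.object({ text: z.string() }),
  execute: async ({ inputData }) => {
    const res = await fetch(inputData.url);
    return { text: await res.text() };
  },
});

const countWords = createStep({
  id: "count-words",
  inputSchema: z.object({ text: z.string() }),
  outputSchema: z.object({ words: z.number() }),
  execute: async ({ inputData }) => {
    return { words: inputData.text.split(/\s+/).filter(Boolean).length };
  },
});

// Steps are chained into a graph; each step's output schema should match
// the next step's input schema.
export const countWordsWorkflow = createWorkflow({
  id: "count-words-workflow",
  inputSchema: z.object({ url: z.string() }),
  outputSchema: z.object({ words: z.number() }),
})
  .then(fetchText)
  .then(countWords)
  .commit();
```

A workflow registered on the Mastra instance (via the workflows option) can then be executed through the workflow run API; see the Workflows documentation for the exact run methods in your version.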