Airtrain is a lightweight agent and LLM framework for building, testing, and monitoring AI systems. This is the official Node.js implementation of Airtrain.
- 🔄 Cross-platform: JavaScript/TypeScript implementation of Airtrain, compatible with the Python version
- 🧩 Modular: Build AI agents from composable skills with a simple, consistent API
- 💾 Secure credentials: Robust credential management for API keys and secrets
- 📊 Telemetry: Built-in telemetry for monitoring and improving your AI systems
- 🚀 Model Integrations: Support for multiple model providers including Fireworks AI
- 🛠 Type-Safe: Fully typed with TypeScript for better developer experience
npm install airtrain
import { Skill, telemetry } from 'airtrain';
import { FireworksClient } from 'airtrain/integrations/fireworks';

// Initialize with your API key (or use environment variables)
const client = new FireworksClient({
  apiKey: process.env.FIREWORKS_API_KEY
});

// Create a skill
const answerQuestion = new Skill({
  name: "answerQuestion",
  description: "Answers general knowledge questions",
  execute: async ({ input }) => {
    const response = await client.createChatCompletion([
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: input }
    ]);
    return response.choices[0]?.message?.content || "No response from model";
  }
});

// Use the skill
async function main() {
  const result = await answerQuestion.process("What is the capital of France?");
  console.log(result);
}

main().catch(console.error);
The library uses Zod for validation:
import { BaseSchema } from 'airtrain';
import { z } from 'zod';
// Define a schema for your skill's input
const MyInputSchema = BaseSchema.extend({
  question: z.string().nonempty(),
  context: z.string().optional()
});
// Type inference works automatically
type MyInput = z.infer<typeof MyInputSchema>;
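Because the schema is a regular Zod object, you can validate untrusted input before handing it to a skill. A minimal sketch using standard Zod methods (safeParse is Zod API, not Airtrain):

// Validate input and narrow its type before use
const parsed = MyInputSchema.safeParse({ question: "What is the capital of France?" });

if (parsed.success) {
  console.log(parsed.data.question); // parsed.data is typed as MyInput
} else {
  console.error(parsed.error.issues);
}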
Securely manage API keys and other sensitive information:
import { Credentials } from 'airtrain';
// Create credentials manager
const credentials = new Credentials({
  configPath: './config',
  encryptionKey: process.env.ENCRYPTION_KEY
});
// Save an API key
await credentials.set('fireworks', { apiKey: 'your-api-key' });
// Use the credentials
const apiKey = (await credentials.get('fireworks')).apiKey;
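The retrieved key can then be passed straight into a provider client. A short sketch that reuses the Credentials and FireworksClient APIs shown in this README:

import { FireworksClient } from 'airtrain/integrations/fireworks';

// Load the stored key and construct a client with it
const { apiKey } = await credentials.get('fireworks');
const client = new FireworksClient({ apiKey });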
Build modular, reusable components:
import { Skill } from 'airtrain';
import { z } from 'zod';
const translator = new Skill({
  name: "translator",
  description: "Translates text to another language",
  input_schema: z.object({
    text: z.string(),
    target_language: z.string()
  }),
  execute: async ({ input }) => {
    // Call your preferred LLM here to translate input.text into input.target_language
    const translatedText = "..."; // placeholder for the model's translation
    return translatedText;
  }
});
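Assuming skills are invoked with process() as in the quick-start example above, calling the translator might look like this (the exact invocation method depends on your Skill configuration):

const translated = await translator.process({
  text: "Hello, world!",
  target_language: "French"
});
console.log(translated);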
The library includes integrations with popular LLM providers:
- Fireworks AI: High-performance, cost-effective LLMs

import { FireworksClient } from 'airtrain/integrations/fireworks';

const client = new FireworksClient({
  apiKey: process.env.FIREWORKS_API_KEY,
  defaultModel: "accounts/fireworks/models/llama-v3-70b-instruct"
});

const response = await client.createChatCompletion([
  { role: "user", content: "Hello!" }
]);
Explore the examples directory for detailed usage examples:
- Basic skills: How to create and use skills
- Fireworks integration: Chat, streaming, function calling examples
- Credential management: Secure storage and retrieval of credentials
This package includes end-to-end tests for integrations with AI providers. These tests demonstrate how to use the library to interact with different AI models.
Before running the tests, set up your API keys in environment variables or a .env file in the root directory:
# .env file
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
FIREWORKS_API_KEY=your_fireworks_api_key
GROQ_API_KEY=your_groq_api_key
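If a test script should load the .env file itself, one common approach is the dotenv package (an assumption here; it is not bundled with Airtrain):

import 'dotenv/config'; // loads variables from .env into process.env

// Fail fast if a required key is missing
if (!process.env.FIREWORKS_API_KEY) {
  throw new Error('FIREWORKS_API_KEY is not set');
}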
To run all end-to-end tests:
./examples/e2e/test_all.sh
To run tests for a specific provider:
# For OpenAI
npx ts-node examples/e2e/openai-e2e.ts
# For Anthropic
npx ts-node examples/e2e/anthropic-e2e.ts
# For Fireworks
npx ts-node examples/e2e/fireworks-e2e.ts
# For Groq
npx ts-node examples/e2e/groq-e2e.ts
The tests demonstrate:
- Basic chat completion
- Tool/function calling
- Streaming responses
- Image analysis (OpenAI and Anthropic)
- System prompts (Anthropic)
- Error handling
# Clone the repository
git clone <repository-url>
cd airtrain-node
# Install dependencies
npm install
# Build the project
npm run build
# Run tests
npm test
# Run with coverage
npm run test:coverage
The library includes a dedicated script for publishing to NPM:
# Create a .env file with your NPM token
cp .env.example .env
# Edit the .env file to add your NPM_TOKEN
# Run the publishing script
npm run publish-npm
The script will:
- Validate your package.json
- Check git status and branch
- Let you choose a version bump (patch, minor, major)
- Build and test the package
- Publish to NPM with your token
For a dry run (without actually publishing):
npm run publish-npm -- --dry-run
MIT
Contributions are welcome! Please see CONTRIBUTING.md for details.
The MCPManager is a TypeScript class that manages Model Context Protocol (MCP) servers and their interactions. It provides functionality for server configuration management, connection handling, and tool execution.
- Installation
- Basic Usage
- Server Configuration
- Configuration Management
- Server Operations
- Tool Execution
- Event Handling
npm install airtrain-node
import { MCPManager } from 'airtrain-node';
// Create a new manager instance
const manager = new MCPManager();
// Add a server configuration
manager.addServer({
  id: 'perplexity-ask',
  command: 'docker',
  args: ['run', '-i', '--rm', '-e', 'PERPLEXITY_API_KEY', 'mcp/perplexity-ask'],
  env: {
    PERPLEXITY_API_KEY: 'your-api-key'
  }
});
// Connect to the server
await manager.connect('perplexity-ask');
// List available tools
const tools = await manager.listTools('perplexity-ask');
// Call a tool
const result = await manager.callTool('perplexity-ask', 'tool-name', { arg1: 'value1' });
interface MCPServerConfig {
  id: string;                    // Unique identifier for the server
  command: string;               // Command to run the server
  args: string[];                // Arguments for the command
  env?: Record<string, string>;  // Optional environment variables
}
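As an illustration, here is a config object shaped like this interface for a filesystem MCP server launched via npx (the package name and path are examples, and this assumes MCPServerConfig is exported alongside MCPManager):

const filesystemServer: MCPServerConfig = {
  id: 'filesystem',
  command: 'npx',
  args: ['-y', '@modelcontextprotocol/server-filesystem', '/path/to/allowed/dir']
  // env is optional and omitted here
};

manager.addServer(filesystemServer);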
The MCPManager supports loading server configurations from various sources:
- From JSON Object:
const config = {
  mcpServers: {
    'server-id': {
      command: 'command',
      args: ['arg1', 'arg2'],
      env: { KEY: 'value' }
    }
  }
};
manager.loadFromJSON(config);
- From JSON String:
const configStr = '{"mcpServers": {...}}';
manager.loadFromString(configStr);
- From File:
manager.loadFromFile('mcp-config.json');
Configurations can be saved in various formats:
- To JSON Object:
const config = manager.dumpToJSON();
- To JSON String:
const configStr = manager.dumpToString();
- To File:
manager.dumpToFile('mcp-config.json');
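These calls round-trip cleanly, so a configuration assembled at runtime can be persisted and restored later using only the methods shown above:

// Persist the current configuration...
manager.dumpToFile('mcp-config.json');

// ...and load it into a fresh manager later
const restored = new MCPManager();
restored.loadFromFile('mcp-config.json');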
You can combine multiple configurations using different strategies for handling duplicate server IDs:
enum DuplicateKeyStrategy {
  KEEP_ORIGINAL = 'keep_original', // Keep existing server (default)
  OVERWRITE = 'overwrite',         // Replace with new server
  RENAME = 'rename'                // Keep both by appending UUID
}

const otherConfig = {
  mcpServers: {
    'server-id': {
      command: 'new-command',
      args: ['new-arg']
    }
  }
};
// Keep original servers (default)
manager.combineServers(otherConfig);
// Overwrite existing servers
manager.combineServers(otherConfig, DuplicateKeyStrategy.OVERWRITE);
// Rename duplicate servers
manager.combineServers(otherConfig, DuplicateKeyStrategy.RENAME);
// Add a server
manager.addServer({
  id: 'server-id',
  command: 'command',
  args: []
});
// Remove a server
manager.removeServer('server-id');
// Get all servers
const servers = manager.getServers();
// Connect to a server
await manager.connect('server-id');
// Disconnect from a server
await manager.disconnect('server-id');
// Clean up all connections
await manager.dispose();
// List available tools
const tools = await manager.listTools('server-id');
// List available resources
const resources = await manager.listResources('server-id');
// List available prompts
const prompts = await manager.listPrompts('server-id');
// Call a tool
const result = await manager.callTool('server-id', 'tool-name', {
  arg1: 'value1',
  arg2: 'value2'
});
The MCPManager emits events for tool call success and failure:
// Listen for successful tool calls
manager.onToolCallSuccess((data) => {
  console.log('Tool call succeeded:', data);
  // data: {
  //   serverId: string;
  //   toolCall: {
  //     toolName: string;
  //     toolArgs: Record<string, any>;
  //   };
  //   result: any;
  // }
});

// Listen for tool call errors
manager.onToolCallError((data) => {
  console.error('Tool call failed:', data);
  // data: {
  //   serverId: string;
  //   toolCall: {
  //     toolName: string;
  //     toolArgs: Record<string, any>;
  //   };
  //   error: string;
  // }
});
The MCPManager provides detailed error messages for various failure scenarios:
- Invalid configuration format
- Server connection failures
- Tool execution errors
- Unsupported methods
- File operation errors
Example error handling:
try {
  await manager.connect('non-existent-server');
} catch (error) {
  console.error('Connection failed:', error.message);
}

try {
  await manager.loadFromFile('non-existent.json');
} catch (error) {
  console.error('Loading failed:', error.message);
}
- Resource Cleanup: Always call dispose() when you're done with the manager to clean up connections and event listeners.
- Error Handling: Implement proper error handling for all async operations.
- Configuration Management: Use the configuration management features to maintain server configurations separately from your code.
- Event Handling: Set up error handlers to catch and handle tool execution failures appropriately.
import { MCPManager, DuplicateKeyStrategy } from 'airtrain-node';
async function main() {
  const manager = new MCPManager();

  try {
    // Load configuration from file
    manager.loadFromFile('mcp-config.json');

    // Add error handling
    manager.onToolCallError((error) => {
      console.error('Tool call failed:', error);
    });

    // Connect to servers
    for (const server of manager.getServers()) {
      await manager.connect(server.id);
    }

    // Execute tools
    const result = await manager.callTool('server-id', 'tool-name', {
      param1: 'value1'
    });
    console.log('Tool execution result:', result);
  } catch (error) {
    console.error('Error:', error);
  } finally {
    // Clean up
    await manager.dispose();
  }
}
main().catch(console.error);