A powerful TypeScript library for building AI agents with multi-threaded conversations, tool execution, and event handling capabilities. Built on top of AWS Bedrock (Claude) with support for advanced features like attachments, tool calling, and event streaming.
- 🤖 AI Agent Management: Create and manage AI agents with knowledge bases, instructions, and memory
- 🧵 Multi-threaded Conversations: Handle parallel conversation threads with state management
- 🛠️ Tool Integration: Define and execute custom tools with type-safe parameters using Zod schemas
- 📎 Attachment Support: Handle various types of attachments (documents, images, videos)
- 🔄 Event Streaming: Built-in event system for real-time monitoring and response handling
- 💾 Memory Management: Track and persist conversation history and agent state
- 🔧 AWS Bedrock Integration: Native support for AWS Bedrock (Claude) with optimized prompting
- Quick Start
- Core Components
- Event System
- Best Practices
- License
- Contributing
- Follow-up Messages
- State Management
- State Management Patterns
- AWS Bedrock Integration
- Tool Decorators
- Streaming Support
- Reasoning Capabilities
import { Agent, Tool, BedrockThreadDriver } from '@flatfile/improv';
import { z } from 'zod';
// Create a custom tool
const calculatorTool = new Tool({
name: 'calculator',
description: 'Performs basic arithmetic operations',
parameters: z.object({
operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
a: z.number(),
b: z.number(),
}),
executeFn: async (args) => {
const { operation, a, b } = args;
switch (operation) {
case 'add': return a + b;
case 'subtract': return a - b;
case 'multiply': return a * b;
case 'divide': return a / b;
}
}
});
// Initialize the Bedrock driver
const driver = new BedrockThreadDriver({
model: 'anthropic.claude-3-haiku-20240307-v1:0',
temperature: 0.7,
});
// Create an agent
const agent = new Agent({
knowledge: [
{ fact: 'The agent can perform basic arithmetic operations.' }
],
instructions: [
{ instruction: 'Use the calculator tool for arithmetic operations.', priority: 1 }
],
tools: [calculatorTool],
driver,
});
// Create and use a thread
const thread = agent.createThread({
prompt: 'What is 25 multiplied by 4?',
onResponse: async (message) => {
console.log('Agent response:', message.content);
}
});
// Send the thread
await thread.send();
// Or, instead of calling send(), stream the response as it is generated
const stream = await thread.stream();
for await (const text of stream) {
process.stdout.write(text); // Print each chunk as it arrives
}
Improv supports advanced reasoning capabilities through the reasoning_config
option in the thread driver. This allows the AI to perform step-by-step reasoning before providing a final answer.
import { Agent, BedrockThreadDriver } from '@flatfile/improv';
const driver = new BedrockThreadDriver({
model: 'anthropic.claude-3-7-sonnet-20250219-v1:0',
temperature: 1,
reasoning_config: {
budget_tokens: 1024,
type: 'enabled',
},
});
const agent = new Agent({
driver,
});
const thread = agent.createThread({
systemPrompt: 'You are a helpful assistant that can answer questions about the world.',
prompt: 'How many people will live in the world in 2040?',
});
const result = await thread.send();
console.log(result.last());
This example enables the AI to work through its reasoning process with a token budget of 1024 tokens before providing a final answer about population projections.
The main agent class that manages knowledge, instructions, tools, and conversation threads.
const agent = new Agent({
knowledge?: AgentKnowledge[], // Array of facts with optional source and timestamp
instructions?: AgentInstruction[], // Array of prioritized instructions
memory?: AgentMemory[], // Array of stored thread histories
systemPrompt?: string, // Base system prompt
tools?: Tool[], // Array of available tools
driver: ThreadDriver, // Thread driver implementation
evaluators?: Evaluator[] // Array of evaluators for response processing
});
Manages a single conversation thread with message history and tool execution.
const thread = new Thread({
messages?: Message[], // Array of conversation messages
tools?: Tool[], // Array of available tools
driver: ThreadDriver, // Thread driver implementation
toolChoice?: 'auto' | 'any', // Tool selection mode
maxSteps?: number // Maximum number of tool execution steps
});
Define custom tools that the agent can use during conversations.
const tool = new Tool({
name: string, // Tool name
description: string, // Tool description
parameters: z.ZodTypeAny, // Zod schema for parameter validation
followUpMessage?: string, // Optional message to guide response evaluation
executeFn: (args: Record<string, any>, toolCall: ToolCall) => Promise<any> // Tool execution function
});
Represents a single message in a conversation thread.
const message = new Message({
content?: string, // Message content
role: 'system' | 'user' | 'assistant' | 'tool', // Message role
toolCalls?: ToolCall[], // Array of tool calls
toolResults?: ToolResult[], // Array of tool results
attachments?: Attachment[], // Array of attachments
cache?: boolean // Whether to cache the message
});
The library uses an event-driven architecture. All major components extend EventSource, allowing you to listen for various events:
// Agent events
agent.on('agent.thread-added', ({ agent, thread }) => {});
agent.on('agent.thread-removed', ({ agent, thread }) => {});
agent.on('agent.knowledge-added', ({ agent, knowledge }) => {});
agent.on('agent.instruction-added', ({ agent, instruction }) => {});
// Thread events
thread.on('thread.response', ({ thread, message }) => {});
thread.on('thread.max_steps_reached', ({ thread, steps }) => {});
// Tool events
tool.on('tool.execution.started', ({ tool, name, args }) => {});
tool.on('tool.execution.completed', ({ tool, name, args, result }) => {});
tool.on('tool.execution.failed', ({ tool, name, args, error }) => {});
- Tool Design
  - Keep tools atomic and focused on a single responsibility
  - Use Zod schemas for robust parameter validation
  - Implement proper error handling in tool execution
  - Use follow-up messages to guide response evaluation
- Thread Management (see the sketch after this list)
  - Set an appropriate maxSteps to prevent infinite loops (default: 30)
  - Use onResponse handlers for processing responses
  - Clean up threads using closeThread() when done
  - Monitor thread events for debugging and logging
- Agent Management
  - Prioritize instructions using the priority field
  - Use knowledge facts to provide context
  - Implement evaluators for complex workflows
  - Leverage the memory system for persistence
- Event Handling
  - Subscribe to relevant events for monitoring and logging
  - Use event data for analytics and debugging
  - Implement proper error handling in event listeners
  - Forward events with appropriate context
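A minimal sketch of these thread-management practices. It assumes that createThread forwards maxSteps to the underlying Thread and that closeThread() is called on the agent (as the agent.thread-removed event suggests); adjust to your setup:

const thread = agent.createThread({
  prompt: 'Summarize the uploaded report.',
  maxSteps: 10, // assumed to be forwarded to the Thread; caps tool-execution steps
  onResponse: async (message) => {
    console.log('Response:', message.content);
  },
});

// Monitor thread events for debugging and logging
thread.on('thread.max_steps_reached', ({ thread, steps }) => {
  console.warn(`Thread stopped after ${steps} steps`);
});

try {
  await thread.send();
} finally {
  agent.closeThread(thread); // assumed call site; clean up when done
}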
Licensed under the MIT license.
Contributions are welcome! Please read our contributing guidelines for details.
Tools can include follow-up messages that guide the AI's evaluation of tool responses. This is particularly useful for:
- Providing context for tool results
- Guiding the AI's interpretation of data
- Maintaining consistent response patterns
- Suggesting next steps or actions
const tool = new Tool({
name: 'dataAnalyzer',
description: 'Analyzes data and returns insights',
parameters: z.object({
data: z.array(z.any()),
metrics: z.array(z.string())
}),
followUpMessage: `Review the analysis results:
1. What are the key insights from the data?
2. Are there any concerning patterns?
3. What actions should be taken based on these results?`,
executeFn: async (args) => {
// Tool implementation
}
});
The library provides several mechanisms for managing state across agents, threads, and tools (a sketch of the agent-level options follows the list):

- Agent state
  - Knowledge base for storing facts
  - Prioritized instructions for behavior guidance
  - Memory system for storing thread histories
  - System prompt for base context
- Thread state
  - Message history tracking
  - Tool execution state
  - Maximum step limits
  - Response handlers
- Tool state
  - Parameter validation
  - Execution tracking
  - Result processing
  - Event emission
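The agent-level mechanisms all map to options on the Agent constructor shown earlier; the facts and instructions below are purely illustrative:

const agent = new Agent({
  systemPrompt: 'You are a data-onboarding assistant.', // base context
  knowledge: [
    { fact: 'Customers upload CSV files before mapping.', source: 'product-docs' },
  ],
  instructions: [
    { instruction: 'Ask for a file before attempting any mapping.', priority: 1 },
    { instruction: 'Keep answers concise.', priority: 2 },
  ],
  driver, // a configured BedrockThreadDriver
});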
Evaluators provide a way to process and validate agent responses:
const evaluator: Evaluator = async ({ thread, agent }, complete) => {
  // Process the thread response
  const lastMessage = thread.last();
  if (lastMessage?.content?.includes('done')) {
    complete(); // Signal completion
  } else {
    // Continue processing
    await thread.send(new Message({
      content: 'Please continue with the task...'
    }));
  }
};
const agent = new Agent({
// ... other options ...
evaluators: [evaluator]
});
Evaluators can:
- Process agent responses
- Trigger additional actions
- Control conversation flow
- Validate results
The three-keyed lock pattern is a state management pattern that ensures controlled flow through tool execution, evaluation, and completion phases. It's implemented as a reusable evaluator:
import { threeKeyedLockEvaluator } from '@flatfile/improv';
const agent = new Agent({
// ... other options ...
evaluators: [
threeKeyedLockEvaluator({
evalPrompt: "Are there other items to process? If not, say 'done'",
exitPrompt: "Please provide a final summary of all actions taken."
})
]
});
The pattern works through three distinct states:
stateDiagram-v2
[*] --> ToolExecution
state "Tool Execution" as ToolExecution {
[*] --> Running
Running --> Complete
Complete --> [*]
}
state "Evaluation" as Evaluation {
[*] --> CheckMore
CheckMore --> [*]
}
state "Summary" as Summary {
[*] --> Summarize
Summarize --> [*]
}
ToolExecution --> Evaluation: Non-tool response
Evaluation --> ToolExecution: Tool called
Evaluation --> Summary: No more items
Summary --> [*]: Complete
note right of ToolExecution
isEvaluatingTools = true
Handles tool execution
end note
note right of Evaluation
isEvaluatingTools = false
nextMessageIsSummary = false
Checks for more work
end note
note right of Summary
nextMessageIsSummary = true
Gets final summary
end note
The evaluator manages these states through:
- Tool Execution State
  - Tracks when tools are being executed
  - Resets state when new tools are called
  - Handles multiple tool executions
- Evaluation State
  - Triggered after tool completion
  - Prompts for more items to process
  - Can return to tool execution if needed
- Summary State
  - Final state before completion
  - Gathers summary of actions
  - Signals completion when done
Key features:
- Automatic state transitions
- Event-based flow control
- Clean event listener management
- Configurable prompts
- Support for multiple tool executions
Example usage with custom prompts:
const workflowAgent = new Agent({
// ... agent configuration ...
evaluators: [
threeKeyedLockEvaluator({
evalPrompt: "Review the results. Should we process more items?",
exitPrompt: "Provide a detailed summary of all processed items."
})
]
});
// The evaluator will automatically:
// 1. Let tools execute freely
// 2. After each tool completion, check if more processing is needed
// 3. When no more items need processing, request a final summary
// 4. Complete the evaluation after receiving the summary
This pattern is particularly useful for:
- Processing multiple items sequentially
- Workflows requiring validation between steps
- Tasks with dynamic tool usage
- Operations requiring final summaries
The library uses AWS Bedrock (Claude) as its LLM provider. Configure your AWS credentials:
// Required environment variables
process.env.AWS_ACCESS_KEY_ID = 'your-access-key';
process.env.AWS_SECRET_ACCESS_KEY = 'your-secret-key';
process.env.AWS_REGION = 'your-region';
// Initialize the driver
const driver = new BedrockThreadDriver({
model: 'anthropic.claude-3-haiku-20240307-v1:0', // Default model
temperature?: number, // Default: 0.7
maxTokens?: number, // Default: 4096
cache?: boolean // Default: false
});
The library provides decorators for creating tools directly on agent classes:
import { Agent, ToolName, ToolDescription, ToolParam } from '@flatfile/improv'; // decorator export path assumed
import { z } from 'zod';
class CustomAgent extends Agent {
@ToolName("sampleData")
@ToolDescription("Sample the original data with the mapping program")
private async sampleData(
@ToolParam("count", "Number of records to sample", z.number())
count: number,
@ToolParam("seed", "Random seed", z.number().optional())
seed?: number
): Promise<any> {
return { count, seed };
}
}
This provides:
- Type-safe tool definitions
- Automatic parameter validation
- Clean method-based tools
- Integrated error handling
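For illustration, a decorated agent can then be used like any other agent; it is assumed here that decorated methods are registered as tools automatically:

const driver = new BedrockThreadDriver({
  model: 'anthropic.claude-3-haiku-20240307-v1:0',
});

const customAgent = new CustomAgent({ driver });

const thread = customAgent.createThread({
  prompt: 'Sample 5 records from the original data.',
});

await thread.send();
console.log(thread.last()?.content);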
The library provides built-in support for streaming responses from the AI model. Key features:
- Real-time text chunks as they're generated
- Automatic message management in the thread
- Event emission for stream progress
- Error handling and recovery
- Compatible with all thread features including tool calls
const thread = agent.createThread({
prompt: 'What is 25 multiplied by 4?',
});
const stream = await thread.stream();
for await (const text of stream) {
process.stdout.write(text);
}
// The final response is also available in the thread
console.log(thread.last()?.content);