This example demonstrates how to use the AIGNE Framework to build a concurrency workflow: a product description fans out to two agents that run in parallel, and their outputs are aggregated into a single result. The example supports both one-shot and interactive chat modes, along with customizable model settings and pipeline input/output.
```mermaid
flowchart LR
in(In)
out(Out)
featureExtractor(Feature Extractor)
audienceAnalyzer(Audience Analyzer)
aggregator(Aggregator)
in --> featureExtractor --> aggregator
in --> audienceAnalyzer --> aggregator
aggregator --> out
classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;
class in inputOutput
class out inputOutput
class featureExtractor processing
class audienceAnalyzer processing
class aggregator processing
```
- Node.js (>=20.0) and npm installed on your machine
- An OpenAI API key for interacting with OpenAI's services
- Optional dependencies (if running the example from source code): pnpm, which the install steps below use (a quick version check follows this list)
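A quick way to confirm these prerequisites are in place (a minimal sketch; adapt to your shell):

```bash
node --version  # should report v20.0.0 or later
npm --version
pnpm --version  # only needed when running the example from source
```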
```bash
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY # Set your OpenAI API key

# Run in one-shot mode (default)
npx -y @aigne/example-workflow-concurrency

# Run in interactive chat mode
npx -y @aigne/example-workflow-concurrency --chat

# Use pipeline input
echo "Analyze product: Smart home assistant with voice control and AI learning capabilities" | npx -y @aigne/example-workflow-concurrency
```
```bash
git clone https://github.com/AIGNE-io/aigne-framework
cd aigne-framework/examples/workflow-concurrency
pnpm install
```
Set up your OpenAI API key in the `.env.local` file:
OPENAI_API_KEY="" # Set your OpenAI API key here
You can use different AI models by setting the `MODEL` environment variable along with the corresponding API key. The framework supports multiple providers (a usage example follows the list):
- **OpenAI**: `MODEL="openai:gpt-4.1"` with `OPENAI_API_KEY`
- **Anthropic**: `MODEL="anthropic:claude-3-7-sonnet-latest"` with `ANTHROPIC_API_KEY`
- **Google Gemini**: `MODEL="gemini:gemini-2.0-flash"` with `GEMINI_API_KEY`
- **AWS Bedrock**: `MODEL="bedrock:us.amazon.nova-premier-v1:0"` with AWS credentials
- **DeepSeek**: `MODEL="deepseek:deepseek-chat"` with `DEEPSEEK_API_KEY`
- **OpenRouter**: `MODEL="openrouter:openai/gpt-4o"` with `OPEN_ROUTER_API_KEY`
- **xAI**: `MODEL="xai:grok-2-latest"` with `XAI_API_KEY`
- **Ollama**: `MODEL="ollama:llama3.2"` with `OLLAMA_DEFAULT_BASE_URL`
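For example, to run the example against Anthropic instead of OpenAI (the model string comes from the list above; the key value is a placeholder):

```bash
export ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY
MODEL="anthropic:claude-3-7-sonnet-latest" pnpm start
```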
For detailed configuration examples, please refer to the `.env.local.example` file in this directory.
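As a rough illustration (placeholder values only; defer to the actual `.env.local.example` in the repository), switching the example to Gemini via `.env.local` might look like:

```bash
MODEL="gemini:gemini-2.0-flash"
GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
```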
```bash
pnpm start # Run in one-shot mode (default)

# Run in interactive chat mode
pnpm start -- --chat

# Use pipeline input
echo "Analyze product: Smart home assistant with voice control and AI learning capabilities" | pnpm start
```
The example supports the following command-line parameters:
| Parameter | Description | Default |
| --- | --- | --- |
| `--chat` | Run in interactive chat mode | Disabled (one-shot mode) |
| `--model <provider[:model]>` | AI model to use in the format `provider[:model]`, where the model part is optional. Examples: `openai` or `openai:gpt-4o-mini` | `openai` |
| `--temperature <value>` | Temperature for model generation | Provider default |
| `--top-p <value>` | Top-p sampling value | Provider default |
| `--presence-penalty <value>` | Presence penalty value | Provider default |
| `--frequency-penalty <value>` | Frequency penalty value | Provider default |
| `--log-level <level>` | Set logging level (`ERROR`, `WARN`, `INFO`, `DEBUG`, `TRACE`) | `INFO` |
| `--input`, `-i <input>` | Specify input directly | None |
```bash
# Run in chat mode (interactive)
pnpm start -- --chat

# Set logging level
pnpm start -- --log-level DEBUG

# Use pipeline input
echo "Analyze product: Smart home assistant with voice control and AI learning capabilities" | pnpm start
```
The following example demonstrates how to build a concurrency workflow:
```typescript
import { AIAgent, AIGNE, ProcessMode, TeamAgent } from "@aigne/core";
import { OpenAIChatModel } from "@aigne/core/models/openai-chat-model.js";

const { OPENAI_API_KEY } = process.env;

const model = new OpenAIChatModel({
  apiKey: OPENAI_API_KEY,
});

// Extracts and summarizes the product's key features.
const featureExtractor = AIAgent.from({
  instructions: `\
You are a product analyst. Extract and summarize the key features of the product.

Product description:
{{product}}`,
  outputKey: "features",
});

// Identifies the product's target audience.
const audienceAnalyzer = AIAgent.from({
  instructions: `\
You are a market researcher. Identify the target audience for the product.

Product description:
{{product}}`,
  outputKey: "audience",
});

const aigne = new AIGNE({ model });

// Create a TeamAgent to run both agents as a parallel workflow.
const teamAgent = TeamAgent.from({
  skills: [featureExtractor, audienceAnalyzer],
  mode: ProcessMode.parallel,
});

const result = await aigne.invoke(teamAgent, {
  product: "AIGNE is a No-code Generative AI Apps Engine",
});
console.log(result);
// Output:
// {
//   features: "**Product Name:** AIGNE\n\n**Product Type:** No-code Generative AI Apps Engine\n\n...",
//   audience: "**Small to Medium Enterprises (SMEs)**: \n - Businesses that may not have extensive IT resources or budget for app development but are looking to leverage AI to enhance their operations or customer engagement.\n\n...",
// }
```
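If you want to try this snippet outside the example package, one possible setup (a sketch, not the repository's own method; the file name `concurrency.ts` is hypothetical, and `tsx` is just one convenient TypeScript runner) is:

```bash
npm install @aigne/core                    # the only dependency the snippet imports
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY
npx -y tsx concurrency.ts                  # hypothetical file holding the snippet above
```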
This project is licensed under the MIT License.