# n8n-nodes-roundrobin

This is a custom node for n8n that allows you to store and retrieve messages in a round-robin fashion, particularly useful for LLM conversation loops with multiple personas.

## Features
- Store messages from different roles/personas in a stateful way during workflow execution
- Enhanced persona profiles with tone, expertise areas, colors, and system prompts
- Round counter tracks conversation turns and allows setting limits
- Reliable binary storage for persistence between workflow executions
- LLM platform integrations with formatting for OpenAI, Anthropic Claude, and Google Gemini
- System prompt management for guiding AI behavior
- Retrieve messages in various formats (array, object by role, or conversation history for LLMs)
- Simplify outputs to create clean, minimal data structures for AI models
- LLM-ready defaults pre-configured for ChatGPT-style conversations
- Clear stored messages when needed
## Installation

### Manual Installation

1. Clone this repository:

   ```bash
   git clone https://github.com/JamesFincher/n8n-nodes-roundrobin.git
   ```

2. Navigate to the project directory:

   ```bash
   cd n8n-nodes-roundrobin
   ```

3. Install dependencies:

   ```bash
   npm install
   ```

4. Build the project:

   ```bash
   npm run build
   ```

5. Link the package to your n8n installation:

   ```bash
   npm link
   ```

6. In your n8n installation directory, link this package:

   ```bash
   cd YOUR_N8N_INSTALLATION_DIRECTORY
   npm link n8n-nodes-roundrobin
   ```

### Global Installation

To install the node globally, run:

```bash
npm install -g n8n-nodes-roundrobin@0.10.0
```
### Docker Installation

If you're using Docker to run n8n, you can include this custom node by:

1. Creating a custom Dockerfile:

   ```dockerfile
   FROM n8nio/n8n
   RUN npm install -g n8n-nodes-roundrobin@0.10.0
   ```

2. Building your custom image:

   ```bash
   docker build -t n8n-with-roundrobin .
   ```

3. Running n8n with your custom image:

   ```bash
   docker run -it --rm \
     --name n8n \
     -p 5678:5678 \
     n8n-with-roundrobin
   ```
## Usage

1. **Initial Setup**: Start by configuring the Round Robin node in "Store" mode.
   - Set the number of spots (default: 3, for User, Assistant, and System)
   - Define the roles for each spot with enhanced persona details
   - Configure colors, tones, expertise areas, and system prompts
2. **Storing Messages**: For each message in your workflow:
   - Configure the Round Robin node to "Store" mode
   - Specify which role's spot to store the message in (e.g., spot index 0 for "User")
   - Set the input field name that contains the message (default: "output")
3. **Retrieving Messages**: When you need to retrieve the conversation:
   - Configure the Round Robin node to "Retrieve" mode
   - Choose the output format (array, object, or conversation history)
   - Select the target LLM platform (OpenAI, Claude, etc.)
   - Configure the system prompt options
   - Enable "Simplify Output" for clean, minimal data
4. **Binary Data Flow**: For storage to persist between executions:
   - Connect the binary output of one Round Robin node to the input of the next
   - All conversation data is stored in the binary property (default: "data")
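The binary persistence described above can be pictured as a serialize/deserialize round trip. The sketch below assumes a base64-encoded JSON payload in the `data` property; the node's exact internal schema is an assumption, not the verbatim format:

```javascript
// Sketch of conversation state round-tripping through an n8n binary
// property. The payload structure here is illustrative, not the
// node's exact internal schema.
const conversation = {
  messages: [
    { role: 'User', content: 'Hello!', timestamp: 1700000000000 },
    { role: 'Assistant', content: 'Hi there!', timestamp: 1700000001000 },
  ],
  roundCount: 1,
};

// Store: serialize to JSON and base64-encode into the "data" binary property.
const binaryProperty = {
  data: Buffer.from(JSON.stringify(conversation)).toString('base64'),
  mimeType: 'application/json',
};

// Retrieve: the next Round Robin node decodes and parses the same property.
const restored = JSON.parse(
  Buffer.from(binaryProperty.data, 'base64').toString('utf8')
);
console.log(restored.messages.length); // -> 2
```

Because the state travels with the item itself, no external database is needed; the chain of binary connections is the persistence layer.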
## Example Workflow

Here's a typical workflow for managing multi-persona conversations with LLMs:

1. **Initialize** with the Round Robin node in "Clear" mode to start fresh
2. **Store** the initial message from the user (spot index 0)
3. **Loop**:
   - Retrieve the conversation history (format: `conversationHistory`, platform: `openai`)
   - Send it to the LLM for the Assistant's response
   - Store the Assistant's response (spot index 1)
   - Retrieve the updated conversation history
   - Process or display the results
   - Store the next user input (spot index 0)
   - Continue the loop
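The turn-taking in this loop boils down to writing each message into its role's spot and reading the interleaved history back out. A minimal sketch (the `store` helper and message shape are illustrative, not the node's API):

```javascript
// Minimal sketch of the round-robin turn-taking in the loop above.
// Spot 0 = User, spot 1 = Assistant, per the default 3-spot setup;
// the store helper is illustrative, not the node's API.
const spots = ['User', 'Assistant', 'System'];
const messages = [];

function store(spotIndex, content) {
  messages.push({ role: spots[spotIndex], content });
}

store(0, 'What is n8n?');                       // initial user input
store(1, 'n8n is a workflow automation tool.'); // LLM reply
store(0, 'Is it open source?');                 // next user input

// A retrieve at this point yields the interleaved history:
const history = messages.map((m) => `${m.role}: ${m.content}`);
console.log(history[0]); // -> User: What is n8n?
```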
## Node Parameters

### Store Mode

- **Number of Spots**: How many distinct roles/personas you have (default: 3)
- **Roles**: Enhanced persona details for each spot:
  - **Name**: Name of the role/persona
  - **Description**: Description of the role/persona
  - **Color**: Visual color indicator for the role
  - **System Prompt Template**: Role-specific system instructions
  - **Enabled**: Whether this role should be included in conversations
- **Input Message Field**: Field name containing the message to store (default: "output")
- **Spot Index**: Which spot to store the message in (0-based index)
- **Binary Input Property**: Name of the binary property containing previous conversation data
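To make the roles configuration concrete, a setup along these lines (field names are illustrative approximations of the options above, not the node's exact schema) might look like:

```javascript
// Hypothetical roles configuration; field names approximate the
// parameters described above and are not the node's exact schema.
const roles = [
  {
    name: 'User',
    description: 'The human participant',
    color: '#4CAF50',
    tone: 'neutral',
    expertiseAreas: [],
    systemPromptTemplate: '',
    enabled: true,
  },
  {
    name: 'Assistant',
    description: 'The AI responder',
    color: '#2196F3',
    tone: 'helpful',
    expertiseAreas: ['coding', 'writing'],
    systemPromptTemplate: 'You are a helpful assistant.',
    enabled: true,
  },
  {
    name: 'System',
    description: 'System-level instructions',
    color: '#9E9E9E',
    tone: '',
    expertiseAreas: [],
    systemPromptTemplate: '',
    enabled: false,
  },
];

// Disabled roles are filtered out of retrieved conversations.
const activeRoles = roles.filter((r) => r.enabled).map((r) => r.name);
console.log(activeRoles); // -> [ 'User', 'Assistant' ]
```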
### Retrieve Mode

- **Binary Input Property**: Name of the binary property containing the conversation data
- **Output Format**:
  - `array`: Returns all messages as an array
  - `object`: Groups messages by role name
  - `conversationHistory`: Formats messages for LLM input (default)
- **LLM Platform**: Format specifically for different AI platforms:
  - **OpenAI (ChatGPT)**: Standard OpenAI format with user/assistant/system roles
  - **Anthropic (Claude)**: Claude-specific format with Human/Assistant markers
  - **Google (Gemini)**: Google's conversation format
  - **Generic**: Generic format compatible with most LLMs
- **System Prompt Options**:
  - **Include System Prompt**: Whether to include system instructions
  - **System Prompt Position**: Place the prompt at the start or end of the conversation
- **Simplify Output**: Returns clean, minimal data structures (default: true)
- **Maximum Messages**: Limit the number of messages returned (0 = all)
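Putting the retrieve options together, here is a sketch of what the output formats, platform targets, and message limiting could produce for the same stored messages. The shapes follow common OpenAI and Anthropic conventions and are assumptions, not the node's verbatim output:

```javascript
// Sketch of the retrieve-mode options. Output shapes are illustrative
// approximations of common OpenAI/Anthropic conventions, not the
// node's exact output.
const stored = [
  { role: 'user', content: 'Hello!' },
  { role: 'assistant', content: 'Hi! How can I help?' },
];
const systemPrompt = 'You are a helpful assistant.';

// "array": all messages as-is.
const asArray = stored;

// "object": messages grouped by role name.
const asObject = stored.reduce((acc, m) => {
  (acc[m.role] = acc[m.role] || []).push(m.content);
  return acc;
}, {});

// "conversationHistory" for OpenAI, with System Prompt Position: start.
const openaiHistory = [{ role: 'system', content: systemPrompt }, ...stored];

// Claude-style prompt using Human/Assistant markers.
const claudePrompt = stored
  .map((m) => `${m.role === 'user' ? 'Human' : 'Assistant'}: ${m.content}`)
  .join('\n\n');

// "Maximum Messages": keep only the most recent N (0 = all).
const maxMessages = 1;
const trimmed = maxMessages > 0 ? stored.slice(-maxMessages) : stored;
console.log(openaiHistory.length); // -> 3
```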
### Clear Mode

- Resets all stored messages while preserving role configurations
- **Binary Input Property**: Name of the binary property containing previous conversation data (optional)
## Tips

- Binary storage provides reliable persistence between workflow executions
- To maintain multiple separate conversations in one workflow, use the Conversation ID parameter
- Corrupt data is handled gracefully with auto-recovery to prevent workflow failures
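The auto-recovery behavior can be pictured as a defensive parse that falls back to an empty conversation instead of failing the workflow (a sketch, not the node's actual implementation):

```javascript
// Sketch of graceful recovery from corrupt binary data: fall back to
// an empty conversation rather than throwing and failing the workflow.
function parseConversation(base64Data) {
  const empty = { messages: [], roundCount: 0 };
  try {
    const parsed = JSON.parse(
      Buffer.from(base64Data, 'base64').toString('utf8')
    );
    // Validate the minimum shape we rely on; otherwise start fresh.
    if (!parsed || !Array.isArray(parsed.messages)) return empty;
    return parsed;
  } catch {
    return empty; // corrupt JSON: auto-recover with a fresh state
  }
}

const good = Buffer.from(
  JSON.stringify({ messages: [{ role: 'user', content: 'hi' }], roundCount: 1 })
).toString('base64');
const bad = Buffer.from('not valid json').toString('base64');

console.log(parseConversation(good).messages.length); // -> 1
console.log(parseConversation(bad).roundCount);       // -> 0
```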
## Version History

- Removed legacy static storage in favor of binary-only storage
- Enhanced binary storage with better error handling and corrupt data recovery
- Improved JSON validation for all binary data operations
- Added automatic role fallback when conversation data is incomplete
- Fixed timestamp handling throughout storage and retrieval operations
- Enhanced error reporting with specific error messages and status codes
- Improved round counter implementation with better error handling
- Enhanced user interface with more intuitive parameter organization
- Added round count information to all output formats
- Further improved storage reliability across workflow executions
- Added conversation round counter to track and limit conversation turns
- Made binary storage the default for better persistence across executions
- Improved UI with better organization, hints, and notices
- Enhanced role configuration interface
- Fixed TypeScript configuration for better development experience
- Added ESLint for code quality assurance
- Improved project structure following n8n community node standards
- Enhanced gitignore patterns for cleaner development
- Fixed type inference issues in storage management
- Complete refactoring of the storage system for maximum reliability
- Added proper static data handling with namespace isolation for multiple nodes
- Split monolithic code into modular functions for better maintainability
- Improved type safety and error handling throughout the codebase
- Removed redundant serialization for better performance
- Fixed critical issue with data persistence between node executions
- Improved static data storage by using unique node identifiers
- Fixed type handling for expertise fields
- Optimized data structures to ensure reliable storage
- Added rich persona profiles with tone, expertise, and color
- Implemented platform-specific formatting for OpenAI, Claude, and Gemini
- Added system prompt management with positioning options
- Enhanced filtering of messages by role enablement
- Fixed critical issue with data persistence between workflow executions
- Implemented robust serialization to ensure messages and roles are properly stored
- Added more thorough type checking and initialization of static data
- Improved error handling for serialization/deserialization
- Fixed storage implementation for reliable persistence
- Added "Simplify Output" option for cleaner data
- Implemented better defaults for LLM use cases
- Improved error handling and debugging
- Added helpful notices and documentation
## Author

James Fincher