- [Introduction](#introduction)
- [Features](#features)
- [Installation](#installation)
- [Configuration](#configuration)
- [Usage](#usage)
- [Advanced Features](#advanced-features)
- [Command Reference](#command-reference)
- [Examples](#examples)
- [Troubleshooting](#troubleshooting)
- [Contributing](#contributing)
- [License](#license)
- [Acknowledgements](#acknowledgements)
## Introduction

Welcome to QLLM CLI, a powerful command-line interface for seamless interaction with Large Language Models (LLMs). QLLM CLI provides a unified platform that supports multiple providers and offers extensive configuration options.
Key Highlights:
- Multi-provider support through qllm-lib integration
- Rich, interactive chat experiences with conversation management
- Efficient one-time question answering
- Advanced image input capabilities for visual analysis
- Fine-grained control over model parameters
- Comprehensive configuration system
## Features

QLLM CLI offers a robust set of features designed for effective AI interaction:
- 🌐 **Multi-provider Support**: Seamlessly switch between LLM providers through qllm-lib integration (see the example after this list).
- 💬 **Interactive Chat Sessions**:
  - Context-aware conversations with history management
  - Real-time streaming responses
  - System message customization
- ❓ **One-time Question Answering**: Quick answers for standalone queries with the `ask` command.
- 🖼️ **Image Input Support**: Analyze images from multiple sources:
  - Local files (supported formats: jpg, jpeg, png, gif, bmp, webp)
  - URLs pointing to online images
  - Clipboard images
  - Screen captures with display selection
- 🎛️ **Model Parameters**: Fine-tune AI behavior with:
  - Temperature (0.0 to 1.0)
  - Max tokens
  - Top P
  - Frequency penalty
  - Presence penalty
  - Stop sequences
- 📋 **Provider Management**:
  - List available providers
  - View supported models per provider
  - Configure default provider and model
- 🔄 **Response Handling**:
  - Stream responses in real-time
  - Save responses to files
  - Extract specific variables from responses
- ⚙️ **Configuration System**:
  - Interactive configuration setup
  - JSON-based configuration storage
  - Environment variable support
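For example, the same question can be sent to different providers with the `-p` and `-m` flags. The provider and model names below are illustrative; run `qllm list providers` and `qllm list models <provider>` to see what your installation supports:

```bash
# Route the same question to two different providers.
# Provider/model names are examples, not a guaranteed list.
qllm ask "What is the capital of France?" -p openai -m gpt-4
qllm ask "What is the capital of France?" -p anthropic -m claude-3-haiku-20240307
```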
## Installation

To use QLLM CLI, ensure you have Node.js installed on your system. Then install the package globally via npm:

```bash
npm install -g qllm
```

Verify the installation:

```bash
qllm --version
```
## Configuration

QLLM CLI provides flexible configuration management through both interactive and command-line interfaces.
Run the interactive configuration wizard:

```bash
qllm configure
```
The wizard guides you through configuring:
- **Provider Settings**
  - Default provider
  - Default model
- **Model Parameters**
  - Temperature (0.0 to 1.0)
  - Max tokens
  - Top P
  - Frequency penalty
  - Presence penalty
  - Stop sequences
- **Other Settings**
  - Log level
  - Custom prompt directory
Set individual configuration values:

```bash
qllm configure --set <key=value>
```

View the current configuration:

```bash
qllm configure --list
```

Get a specific setting:

```bash
qllm configure --get <key>
```
Settings are stored in `~/.qllmrc` as JSON. While manual editing is possible, using the `configure` commands is recommended.
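For orientation, a `~/.qllmrc` written by the wizard might look roughly like the sketch below. The key names shown are assumptions based on the settings above, not a schema reference; `provider` and `model` match the keys used with `qllm configure --set`:

```json
{
  "provider": "openai",
  "model": "gpt-4",
  "temperature": 0.7,
  "maxTokens": 1024,
  "logLevel": "info"
}
```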
## Usage

QLLM CLI supports three main interaction modes:

- **Direct Questions**

  ```bash
  qllm ask "What is the capital of France?"
  ```

- **Interactive Chat**

  ```bash
  qllm chat
  ```

- **Template-based Execution**

  ```bash
  qllm run template.yaml
  ```
Include images in your queries:

```bash
# Local file
qllm ask "What's in this image?" -i path/to/image.jpg

# URL
qllm ask "Describe this image" -i https://example.com/image.jpg

# Clipboard
qllm ask "Analyze this image" --use-clipboard

# Screenshot of display 1
qllm ask "What's on my screen?" --screenshot 1
```
Control output behavior:

```bash
# Save to file
qllm ask "Query" -o output.txt

# Disable streaming
qllm ask "Query" --no-stream

# Add a system message
qllm ask "Query" --system-message "You are a helpful assistant"
```
## Advanced Features

### Template Execution

QLLM CLI supports running predefined templates:

```bash
qllm run template.yaml
```

Template options:

- `-v, --variables`: Provide template variables in JSON format
- `-ns, --no-stream`: Disable response streaming
- `-o, --output`: Save the response to a file
- `-e, --extract`: Extract specific variables from the response
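For orientation, a template file might look something like the sketch below. The actual schema is defined by qllm-lib, so treat the field names here (`name`, `description`, `input_variables`, `content`) as assumptions rather than a reference:

```yaml
# Hypothetical template sketch — field names are illustrative assumptions;
# consult the qllm-lib documentation for the real template schema.
name: greeting
description: Write a short greeting for a person
input_variables:
  name:
    type: string
    description: The person's name
  age:
    type: number
    description: The person's age
content: >
  Write a short, friendly greeting for {{name}}, who is {{age}} years old.
```

Run it with variables supplied as JSON, e.g. `qllm run greeting.yaml -v '{"name": "John", "age": 30}'`.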
### Chat Commands

In chat mode, use these commands:

- `/help`: Show available commands
- `/new`: Start a new conversation
- `/save`: Save the conversation
- `/load`: Load a conversation
- `/list`: Show conversation history
- `/clear`: Clear the conversation
- `/models`: List available models
- `/providers`: List providers
- `/options`: Show chat options
- `/set <option> <value>`: Set a chat option
- `/image <path>`: Add an image
- `/clearimages`: Clear the image buffer
- `/listimages`: List images in the buffer
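For instance, generation settings can be adjusted mid-session with `/set` (here `temperature`, one of the model parameters listed earlier, is an assumed example of a settable option):

```text
/set temperature 0.9
/options
```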
### Provider and Model Listing

List available providers:

```bash
qllm list providers
```

List models for a provider:

```bash
qllm list models <provider>
```

Options:

- `-f, --full`: Show full model details
- `-s, --sort <field>`: Sort by field (id, created)
- `-r, --reverse`: Reverse the sort order
- `-c, --columns`: Select display columns
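These options can be combined; for example, to list a provider's models with full details, sorted by creation date with the newest first:

```bash
qllm list models openai -f -s created -r
```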
### Environment Variables

Configure provider API keys using environment variables:

```bash
export OPENAI_API_KEY=your_key_here
export ANTHROPIC_API_KEY=your_key_here
```
### Piped Input

Use QLLM CLI with piped input:

```bash
echo "Explain quantum computing" | qllm ask
cat article.txt | qllm ask "Summarize this:"
```
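Because it reads piped input, QLLM CLI also slots naturally into shell scripts. A minimal sketch for batch-summarizing a directory of text files (the `docs/` and `summaries/` paths are illustrative):

```bash
#!/usr/bin/env bash
# Summarize every .txt file in docs/ into a matching file under summaries/.
mkdir -p summaries
for f in docs/*.txt; do
  cat "$f" | qllm ask "Summarize this text:" --no-stream -o "summaries/$(basename "$f")"
done
```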
## Command Reference

### Core Commands

```bash
qllm [template]       # Run a template, or start ask mode if no template is given
qllm ask [question]   # Ask a one-time question
qllm chat             # Start an interactive chat session
qllm configure        # Configure settings
qllm list             # List providers or models
```

### Common Options

```bash
-p, --provider <provider>   # LLM provider to use
-m, --model <model>         # Specific model to use
--max-tokens <number>       # Maximum tokens to generate
--temperature <number>      # Temperature for generation (0-1)
--log-level <level>         # Set log level (error, warn, info, debug)
```

### Ask Command Options

```bash
-i, --image <path>     # Include image file or URL (multiple allowed)
--use-clipboard        # Use image from clipboard
--screenshot <number>  # Capture screenshot from display
-ns, --no-stream       # Disable response streaming
-o, --output <file>    # Save response to file
-s, --system-message   # Set system message
```

### Configure Command Options

```bash
-l, --list             # List all settings
-s, --set <key=value>  # Set a configuration value
-g, --get <key>        # Get a configuration value
```

### List Command Options

```bash
list providers          # List available providers
list models <provider>  # List models for a provider
-f, --full              # Show full model details
-s, --sort <field>      # Sort by field
-r, --reverse           # Reverse sort order
-c, --columns           # Select columns to display
```

### Run Command Options

```bash
-t, --type <type>       # Template source type (file, url, inline)
-v, --variables <json>  # Template variables in JSON format
-e, --extract <vars>    # Variables to extract from response
```
## Examples

### Simple Questions

```bash
# Direct question
qllm ask "What is quantum computing?"

# With a system message
qllm ask "Explain like I'm 5: What is gravity?" --system-message "You are a teacher for young children"
```

### Interactive Chat

```bash
# Start chat with default settings
qllm chat

# Start chat with a specific provider and model
qllm chat -p openai -m gpt-4
```

### Local Image Analysis

```bash
# Analyze a single image
qllm ask "What's in this image?" -i photo.jpg

# Compare multiple images
qllm ask "What are the differences?" -i image1.jpg -i image2.jpg
```

### Screen Analysis

```bash
# Capture and analyze the screen
qllm ask "What's on my screen?" --screenshot 1

# Use a clipboard image
qllm ask "Analyze this diagram" --use-clipboard
```

### Template Usage

```bash
# Run a template with variables
qllm run template.yaml -v '{"name": "John", "age": 30}'

# Extract specific variables from the response
qllm run analysis.yaml -e "summary,key_points"
```

### Output Control

```bash
# Save to a file
qllm ask "Write a story about AI" -o story.txt

# Disable streaming for batch processing
qllm ask "Generate a report" --no-stream
```

### Provider Management

```bash
# List available providers
qllm list providers

# View models for a specific provider, with full details
qllm list models openai -f
```

### Setting Preferences

```bash
# Set the default provider
qllm configure --set provider=openai

# Set the default model
qllm configure --set model=gpt-4
```

### Viewing Settings

```bash
# View all settings
qllm configure --list

# Check a specific setting
qllm configure --get model
```

### Working with Piped Input

```bash
# Pipe text for analysis
cat document.txt | qllm ask "Summarize this text"

# Process command output
ls -l | qllm ask "Explain these file permissions"
```
## Troubleshooting

Common issues and how to resolve them:

- **Configuration Issues**
  - Check your configuration: `qllm configure --list`
  - Verify API keys are set correctly in environment variables
  - Ensure provider and model selections are valid
- **Provider Errors**
  - Verify provider availability: `qllm list providers`
  - Check model compatibility: `qllm list models <provider>`
  - Ensure the API key is valid for the selected provider
- **Image Input Problems**
  - Verify supported formats: jpg, jpeg, png, gif, bmp, webp
  - Check file permissions and paths
  - For clipboard issues, ensure the image is properly copied
  - For screenshots, verify the display number is correct
- **Network Issues**
  - Check your internet connection
  - Verify that no firewall is blocking requests
  - Try the `--no-stream` option to rule out streaming issues

Common error messages and solutions:

- **"Invalid provider"**
  - Use `qllm list providers` to see available providers
  - Set a valid provider: `qllm configure --set provider=<provider>`
- **"Invalid model"**
  - Check available models: `qllm list models <provider>`
  - Set a valid model: `qllm configure --set model=<model>`
- **"Configuration error"**
  - Reset the configuration by removing `~/.qllmrc`
  - Reconfigure: `qllm configure`
- **"API key not found"**
  - Set the required environment variables
  - Verify the API key format and validity
- **Version Issues**
  - Check the current version: `qllm --version`
  - Update to the latest: `npm update -g qllm`
- **Installation Problems**
  - Verify your Node.js version (14+)
  - Try with sudo if you hit permission errors: `sudo npm install -g qllm`
  - Clear the npm cache if needed: `npm cache clean --force`

If issues persist:

- Check the GitHub Issues
- Use `qllm <command> --help` for command-specific help
- Run with debug logging: `qllm --log-level debug <command>`
## Contributing

We welcome contributions to QLLM CLI! Here's how you can help.

### Getting Started

1. Fork and clone the repository:

   ```bash
   git clone https://github.com/your-username/qllm.git
   cd qllm
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Create a feature branch:

   ```bash
   git checkout -b feature/your-feature-name
   ```

### Guidelines

- **Code Style**
  - Follow the existing code style
  - Use TypeScript for type safety
  - Add JSDoc comments for public APIs
  - Keep functions focused and modular
- **Testing**
  - Add tests for new features
  - Ensure existing tests pass: `npm test`
  - Include both unit and integration tests
- **Documentation**
  - Update README.md for new features
  - Add JSDoc comments
  - Include examples in documentation
  - Keep documentation synchronized with code

### Submitting Changes

1. Commit your changes:

   ```bash
   git add .
   git commit -m "feat: description of your changes"
   ```

2. Push to your fork:

   ```bash
   git push origin feature/your-feature-name
   ```

3. Create a Pull Request:

   - Provide a clear description of the changes
   - Reference any related issues
   - Include test results
   - List any breaking changes

### Review Process

- Maintainers will review your PR
- Address any requested changes
- Once approved, your changes will be merged
- Your contribution will be acknowledged
## License

QLLM CLI is licensed under the Apache License, Version 2.0.
Copyright 2023 Quantalogic
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
## Acknowledgements

QLLM CLI is made possible thanks to:
- The open-source community
- Contributors and maintainers
- LLM providers and their APIs
- Node.js and npm ecosystem
Special thanks to all who have contributed to making this project better!