A TypeScript/JavaScript module for implementing Retrieval-Augmented Generation (RAG) using the Qdrant vector database, Google's Generative AI embeddings, and an LLM served through Groq.
Check out the GitHub repository for this module here.
- Query classification using an LLM
- Vector storage and retrieval using Qdrant
- Text embeddings using Google's Generative AI
- Customizable prompts for classification and responses
- Automatic text chunking and vector store creation
- TypeScript support with full type definitions
```bash
npm install rag-module @qdrant/js-client-rest @langchain/google-genai groq-sdk langchain
```
You'll need the following API keys; a sketch for loading them from environment variables follows this list:
- Google API key for embeddings; get it here
- Qdrant API key and URL for vector storage; get them from the API Keys section of your database cluster
- Groq API key for LLM capabilities; get it here
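Rather than hardcoding the keys as in the examples below, you can read them from the environment. A minimal sketch, assuming you use dotenv and the variable names shown (neither is required by rag-module):

```typescript
import 'dotenv/config'; // assumption: dotenv loads a local .env file; any loader works
import { createRAG, RAGConfig } from 'rag-module';

// The environment variable names here are illustrative, not mandated by rag-module.
const config: RAGConfig = {
  googleApiKey: process.env.GOOGLE_API_KEY!,
  qdrantUrl: process.env.QDRANT_URL!,
  qdrantApiKey: process.env.QDRANT_API_KEY!,
  groqApiKey: process.env.GROQ_API_KEY!,
};

const rag = createRAG(config);
```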
```typescript
import { createRAG, RAGConfig } from 'rag-module';

const config: RAGConfig = {
  googleApiKey: 'your-google-api-key',
  qdrantUrl: 'your-qdrant-url',
  qdrantApiKey: 'your-qdrant-api-key',
  groqApiKey: 'your-groq-api-key',
  collectionName: 'custom-collection',
  fallbackResponse: 'Sorry, I could not find relevant information.'
};

const rag = createRAG(config);

async function main() {
  try {
    const result = await rag.processQuery(
      "Your context text here...",
      "Your query here..."
    );
    console.log('Response:', result.response);
    console.log('Category:', result.category);
  } catch (error) {
    console.error('Error:', error);
  }
}

main();
```
You can customize the classification and response prompts:
```typescript
const config: RAGConfig = {
  // ... other config options
  prompts: {
    classification: `Analyze this query and categorize it as either 'technical', 'historical',
or 'general': {query}. Reply with just one word.`,
    response: `Based on this context: '{context}', please answer this question: '{query}'.
Include relevant quotes when appropriate.`
  }
};
```
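In both prompts, `{query}` and `{context}` are placeholders that the module fills in at runtime with the user's query and the retrieved context, as the default prompts further below also show.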
| Option | Type | Required | Default | Description |
|---|---|---|---|---|
| googleApiKey | string | Yes | - | Google API key for embeddings |
| qdrantUrl | string | Yes | - | Qdrant server URL |
| qdrantApiKey | string | Yes | - | Qdrant API key |
| groqApiKey | string | Yes | - | Groq API key |
| collectionName | string | No | 'default_collection' | Name of the Qdrant collection |
| fallbackResponse | string | No | 'I could not find relevant information.' | Response when no relevant information is found |
| prompts | object | No | See below | Custom prompts for classification and response |
The default prompts are:

```typescript
{
  classification: `I am providing you a query, based on the query your work is detect whether
that is related to marks, events or general information. Query: {query}`,
  response: `You are a helpful assistant. Based on this context: "{context}",
please answer this question: "{query}"`
}
```
`createRAG(config)` creates a new RAG implementation instance with the provided configuration.
`rag.processQuery(context, query)` processes a query against the provided context and returns a response together with its classification.
Returns:
```typescript
interface RAGResponse {
  response: string; // The generated response
  category: string; // The classified category
}
```
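Combined with the usage examples, this implies the following call shape; the interface name below is hypothetical and the signature is inferred from the examples, not copied from the module's typings:

```typescript
import { RAGResponse } from 'rag-module';

// Hypothetical wrapper type: only processQuery's inferred signature matters here.
interface RAGInstanceShape {
  processQuery(context: string, query: string): Promise<RAGResponse>;
}
```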
A separate method updates the classification and response prompts after initialization.
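The method's name isn't given in this section, so the sketch below assumes it is exposed as `updatePrompts()` (a hypothetical name) taking the same `prompts` shape used in `RAGConfig`; check the module's typings for the actual method:

```typescript
// Hypothetical method name; the prompts object mirrors the `prompts` config option.
rag.updatePrompts({
  classification: `Classify this query as 'billing', 'technical', or 'general': {query}`,
  response: `Answer using only this context: "{context}". Question: "{query}"`
});
```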
The module throws errors for the cases below; a handling sketch follows the list:
- Missing required configuration parameters
- Vector store creation failures
- Query processing errors
- Collection creation/management issues
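A minimal sketch of wrapping a query with defensive handling, reusing the `rag` instance from the basic usage example; the fallback string and the `Error` check are generic defensive code, not behavior documented by the module:

```typescript
// rag-module documents that it throws on the failures above, but not the error
// shape, so this treats anything caught as an ordinary Error value.
async function ask(context: string, query: string): Promise<string> {
  try {
    const result = await rag.processQuery(context, query);
    return result.response;
  } catch (error) {
    console.error('RAG query failed:', error instanceof Error ? error.message : error);
    return 'Sorry, something went wrong while answering that.';
  }
}
```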
The module includes TypeScript definitions for all exports. Import types as needed:
```typescript
import { RAGConfig, RAGResponse } from 'rag-module';
```
Putting it all together (top-level `await` assumes an ES module; otherwise wrap the calls in an async function):

```typescript
import { createRAG, RAGConfig } from 'rag-module';

const config: RAGConfig = {
  // ... other config options
  prompts: {
    classification: `Categorize as 'technical', 'historical', or 'general': {query}`,
    response: `Based on this context: '{context}', answer: '{query}'`
  }
};

const rag = createRAG(config);

const result = await rag.processQuery(
  "Technical documentation about TypeScript...",
  "What are TypeScript interfaces?"
);

// result.category will be 'technical'
```