RAG Chatbot is a customizable chatbot package that uses LangChain with OpenAI vector embeddings. It enables developers to easily integrate a sophisticated conversational AI assistant into their application's backend.
- Customizable: Tailor the chatbot's responses and behaviour to your application's needs by supplying your knowledge base in .docx or .txt format.
- Powered by LangChain vector embeddings: Benefit from state-of-the-art language understanding and generation capabilities.
- Easy Integration: Simple npm installation and usage.
- Scalable: Designed to handle varying levels of user interactions efficiently.
You can install RAG Chatbot via npm:
npm install rag-chatbot
To use RAG Chatbot in your project, follow these steps:
- Import setupDataAndVectorStoreOnce to create a vector store (re-run this whenever your knowledge base is modified):
- Once your vector store exists, import userQueryWithConversationChain, integrate it into your chatbot, and pass it the user's query and your OpenAI key; it will answer the query from the knowledge base automatically.
- You can tune userQueryWithConversationChain's parameters for better accuracy with your vector store and use case.
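Conceptually, the steps above boil down to chunk, embed, store, then retrieve the topK closest chunks to each query. The sketch below illustrates that flow in plain Node with a toy hashing "embedding"; it is a conceptual illustration only, not rag-chatbot's actual internals (which use OpenAI embeddings):

```javascript
// Split text into fixed-size word chunks (a real splitter would overlap chunks).
function chunkText(text, wordsPerChunk = 20) {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  for (let i = 0; i < words.length; i += wordsPerChunk) {
    chunks.push(words.slice(i, i + wordsPerChunk).join(' '));
  }
  return chunks;
}

// Toy bag-of-words "embedding": hash each word into a fixed-size vector.
// A stand-in for real OpenAI embeddings.
function embed(text, dims = 64) {
  const v = new Array(dims).fill(0);
  for (const w of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    let h = 0;
    for (const c of w) h = (h * 31 + c.charCodeAt(0)) % dims;
    v[h] += 1;
  }
  return v;
}

// Cosine similarity between two vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Retrieve the topK chunks most similar to the query.
function retrieve(query, store, topK = 3) {
  const qv = embed(query);
  return store
    .map(({ chunk, vector }) => ({ chunk, score: cosine(qv, vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(r => r.chunk);
}

// Build a tiny in-memory vector store and query it.
const store = chunkText(
  'RAG Chatbot answers questions from your own documents. ' +
  'It embeds document chunks into a vector store and retrieves the best matches.'
).map(chunk => ({ chunk, vector: embed(chunk) }));

console.log(retrieve('How does it answer questions from documents?', store, 1));
```

The retrieved chunks are then passed to the LLM as context alongside the user's query, which is what lets the chatbot answer from your knowledge base rather than from the model's training data alone.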
const { setupDataAndVectorStoreOnce } = require('rag-chatbot');

// Paths to your knowledge-base files (.txt and/or .docx)
const filePaths = ['./demo.txt', './demo.docx'];
const openAiKey = process.env.OPENAI_API_KEY;

setupDataAndVectorStoreOnce(filePaths, openAiKey);
const { userQueryWithConversationChain } = require('rag-chatbot');

// query: the user's question; openAIApiKey: your OpenAI API key
const result = userQueryWithConversationChain(query, openAIApiKey);
console.log(result.response, result.metadata);
const { userQueryWithConversationChain } = require('rag-chatbot');

// Pass additional options to tune behaviour (see configuration options below)
const result = userQueryWithConversationChain(query, openAIApiKey, model, temperature, topK, prompt, maxMemoryToken);
console.log(result.response, result.metadata);
Here are the available configuration options:
- openAIApiKey: Your OpenAI API key (required).
- model: The model to use for generating responses (default: "gpt-4").
- temperature: Sampling temperature for response generation (default: 0.7).
- topK: Number of top-matching chunks to retrieve from the vector store for each query (default: 3).
- prompt: A custom prompt to pass to the LLM.
- maxMemoryToken: Maximum number of tokens of conversation history to keep in the LLM's contextual memory (default: 10).
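To illustrate what a maxMemoryToken-style limit does, the sketch below trims conversation history to a token budget, dropping the oldest turns first. The ~4-characters-per-token estimate is a rough heuristic for illustration, not rag-chatbot's actual tokenizer:

```javascript
// Rough token estimate: ~4 characters per token (heuristic only).
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Keep only the most recent turns whose combined token count fits the budget.
function trimMemory(turns, maxMemoryToken) {
  const kept = [];
  let used = 0;
  for (let i = turns.length - 1; i >= 0; i--) {
    const cost = estimateTokens(turns[i]);
    if (used + cost > maxMemoryToken) break;
    kept.unshift(turns[i]);
    used += cost;
  }
  return kept;
}

const history = [
  'User: What formats are supported?',
  'Bot: .docx and .txt files.',
  'User: How do I install it?',
  'Bot: Run npm install rag-chatbot.',
];
console.log(trimMemory(history, 20)); // only the most recent turns that fit
```

A small maxMemoryToken keeps requests cheap but makes the bot forget earlier turns sooner; raise it if your conversations depend on long-range context.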
This project is licensed under the MIT License - see the LICENSE file for details.