A Node.js wrapper for logging LLM traces directly to Helicone via OpenLLMetry, bypassing the Helicone proxy. This package lets you monitor and analyze your OpenAI API usage without routing requests through a proxy server.
- Direct logging to Helicone without proxy
- OpenLLMetry support for standardized LLM telemetry
- Custom property tracking
- Environment variable configuration
- TypeScript support
```bash
npm install @helicone/async
```
- Create a Helicone account and get your API key from helicone.ai/developer
- Set up your environment variables (for loading these from a `.env` file, see the note after the basic usage example):

  ```bash
  export HELICONE_API_KEY=<your API key>
  export OPENAI_API_KEY=<your OpenAI API key>
  ```

- Basic usage:
  ```js
  const { HeliconeAsyncOpenAI } = require("@helicone/async");

  const openai = new HeliconeAsyncOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    heliconeMeta: {
      apiKey: process.env.HELICONE_API_KEY,
    },
  });

  async function main() {
    const chatCompletion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Hello world" }],
    });
    console.log(chatCompletion.choices[0].message);
  }

  main();
  ```
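If you keep these variables in a local `.env` file instead of exporting them in your shell, a loader such as dotenv (a separate dependency, not part of this package) can populate `process.env` before the client is constructed:

```js
// Load variables from a .env file into process.env
// (requires a separate `npm install dotenv`)
require("dotenv").config();
```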
The `heliconeMeta` object supports several configuration options:
```ts
interface HeliconeMeta {
  apiKey?: string; // Your Helicone API key
  custom_properties?: Record<string, any>; // Custom properties to track
  cache?: boolean; // Enable/disable caching
  retry?: boolean; // Enable/disable retries
  user_id?: string; // Track requests by user
}
```
For example, to tag requests with custom properties and a user ID:

```js
const openai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    custom_properties: {
      project: "my-project",
      environment: "production",
    },
    user_id: "user-123",
  },
});
```
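These properties appear alongside each request in the Helicone dashboard, so they can be used to filter and segment traffic. The `cache` and `retry` flags from the interface above are set the same way, inside the `heliconeMeta` block.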
A typical async/await pattern with error handling:

```js
async function generateResponse() {
  try {
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "What is the capital of France?" },
      ],
      max_tokens: 150,
    });
    return response.choices[0].message;
  } catch (error) {
    console.error("Error:", error);
  }
}
```
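The helper returns the assistant's message object (or `undefined` if the call failed), so a caller might use it like this:

```js
generateResponse().then((message) => {
  if (message) console.log(message.content);
});
```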
For finer-grained error handling, you can distinguish API error responses from network-level failures:

```js
try {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (error) {
  if (error.response) {
    // The API responded with a non-2xx status
    console.error(error.response.status);
    console.error(error.response.data);
  } else {
    // Network failure or error thrown before a response arrived
    console.error(error.message);
  }
}
```
- Always store API keys in environment variables
- Implement proper error handling
- Use custom properties to track important metadata
- Set appropriate timeout values for your use case
- Consider implementing retry logic for production environments (a sketch follows this list)
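A minimal sketch of that last point, assuming a hand-rolled helper (`withRetry` below is hypothetical, not part of @helicone/async):

```js
// Hypothetical helper: retry an async operation with exponential backoff
async function withRetry(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      if (i === attempts - 1) throw error; // out of attempts: rethrow
      const delay = baseDelayMs * 2 ** i; // 500 ms, 1 s, 2 s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage, inside an async function:
const completion = await withRetry(() =>
  openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello" }],
  })
);
```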
We welcome contributions! Please see our contributing guidelines for details.
Licensed under Apache-2.0.
- Documentation: https://docs.helicone.ai
- Issues: GitHub Issues
- Discord: Join our community