A transparent Node.js middleware that automatically tracks OpenAI usage and sends metrics to Revenium for billing and analytics. Features seamless metadata integration with native TypeScript support - no type casting required! Works with both TypeScript and JavaScript projects.
- ✅ Seamless metadata integration - Native TypeScript support, no type casting required
- ✅ Optional metadata - Track users, organizations, and other custom metadata (all metadata fields are optional)
- ✅ Azure OpenAI support - Full support for Azure OpenAI with automatic detection
- ✅ TypeScript & JavaScript support - Complete type safety for TS, works great with JS
- ✅ Streaming support - Handles both regular and streaming OpenAI requests
- ✅ Fire-and-forget tracking - Never blocks your application flow
npm install revenium-middleware-openai-node
export REVENIUM_METERING_API_KEY=hak_your_revenium_api_key
# export REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter # Optional: defaults to this URL
export OPENAI_API_KEY=sk_your_openai_api_key
export REVENIUM_DEBUG=true # Optional: for debug logging
Or create a `.env` file in your project root:
# .env file
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key
# REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter # Optional: defaults to this URL
OPENAI_API_KEY=sk_your_openai_api_key
REVENIUM_DEBUG=true
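Under the hood, configuration like this is typically resolved from the environment with sensible defaults. A minimal sketch of that resolution logic (the `resolveConfig` helper is illustrative, not part of the package's API):

```typescript
// Hypothetical sketch of env-driven configuration resolution.
// Variable names match the ones documented above; resolveConfig is illustrative.
interface ReveniumConfig {
  apiKey: string;
  baseUrl: string;
  debug: boolean;
}

function resolveConfig(env: Record<string, string | undefined>): ReveniumConfig {
  const apiKey = env.REVENIUM_METERING_API_KEY;
  if (!apiKey) {
    throw new Error('REVENIUM_METERING_API_KEY is required');
  }
  return {
    apiKey,
    // Falls back to the documented default metering endpoint.
    baseUrl: env.REVENIUM_METERING_BASE_URL ?? 'https://api.revenium.io/meter',
    debug: env.REVENIUM_DEBUG === 'true',
  };
}

const config = resolveConfig({ REVENIUM_METERING_API_KEY: 'hak_example' });
console.log(config.baseUrl); // https://api.revenium.io/meter
```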
// Step 1: Import and initialize Revenium middleware
import { initializeReveniumFromEnv, patchOpenAIInstance } from 'revenium-middleware-openai-node';
import OpenAI from 'openai';
// Step 2: Initialize middleware and patch OpenAI instance
initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());
// Step 3: Use OpenAI exactly as you normally would
const response = await openai.chat.completions.create({
model: 'gpt-4o-mini',
messages: [
{ role: 'user', content: 'Hello, world!' }
]
});
console.log(response.choices[0].message.content);
// ✨ Usage automatically tracked to Revenium!
// Step 1: Import and initialize Revenium middleware
const { initializeReveniumFromEnv, patchOpenAIInstance } = require('revenium-middleware-openai-node');
const OpenAI = require('openai');
// Step 2: Initialize middleware and patch OpenAI instance
initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());
// Step 3: Use OpenAI exactly as you normally would
const response = await openai.chat.completions.create({
model: 'gpt-4o-mini',
messages: [
{ role: 'user', content: 'Hello, world!' }
]
});
console.log(response.choices[0].message.content);
// ✨ Usage automatically tracked to Revenium!
Want to see it in action? Check out the complete working examples:
- OpenAI Basic - Chat completions and embeddings with optional metadata
- OpenAI Streaming - Streaming responses and batch embeddings with optional metadata
- Azure Basic - Azure OpenAI chat completions and embeddings with optional metadata
- Azure Streaming - Azure OpenAI streaming and batch embeddings with optional metadata
Track users, organizations, and custom data with seamless TypeScript integration:
import { initializeReveniumFromEnv, patchOpenAIInstance } from 'revenium-middleware-openai-node';
import OpenAI from 'openai';
// Initialize and patch OpenAI instance
initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());
const response = await openai.chat.completions.create({
model: 'gpt-4',
messages: [
{ role: 'user', content: 'Summarize this document' }
],
// ✨ Add custom tracking metadata - all fields optional, no type casting needed!
usageMetadata: {
subscriber: {
id: 'user-12345',
email: 'john@acme-corp.com'
},
organizationId: 'acme-corp',
productId: 'document-ai',
taskType: 'document-summary',
agent: 'doc-summarizer-v2',
traceId: 'session-abc123'
}
});
The middleware automatically handles streaming requests with seamless metadata:
import { initializeReveniumFromEnv, patchOpenAIInstance } from 'revenium-middleware-openai-node';
import OpenAI from 'openai';
// Initialize and patch OpenAI instance
initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());
const stream = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Tell me a story' }],
stream: true,
// ✨ Metadata works seamlessly with streaming - all fields optional!
usageMetadata: {
organizationId: 'story-app',
taskType: 'creative-writing'
}
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Usage tracking happens automatically when stream completes
If you need to disable Revenium tracking temporarily, you can unpatch the OpenAI client:
import { unpatchOpenAI, patchOpenAI } from 'revenium-middleware-openai-node';
// Disable tracking
unpatchOpenAI();
// Your OpenAI calls now bypass Revenium tracking
await openai.chat.completions.create({...});
// Re-enable tracking
patchOpenAI();
Common use cases:
- Debugging: Isolate whether issues are caused by the middleware
- Testing: Compare behavior with/without tracking
- Conditional tracking: Enable/disable based on environment
- Troubleshooting: Temporary bypass during incident response
Note: This affects all OpenAI instances globally since we patch the prototype methods.
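For the conditional-tracking case, one common pattern is to disable tracking only around a specific block of work and always restore it afterwards. A sketch with the patch/unpatch pair injected as parameters so it stands alone (the `withTrackingDisabled` helper is illustrative, not part of the package):

```typescript
// Illustrative helper: temporarily disable tracking around a block of work.
// patch/unpatch are passed in so the sketch is self-contained; in real code
// they would come from 'revenium-middleware-openai-node'.
async function withTrackingDisabled<T>(
  unpatch: () => void,
  patch: () => void,
  work: () => Promise<T>
): Promise<T> {
  unpatch();
  try {
    return await work();
  } finally {
    // Always re-enable tracking, even if the work throws.
    patch();
  }
}

// Demonstration with stub patch functions that record call order.
const calls: string[] = [];
withTrackingDisabled(
  () => { calls.push('unpatch'); },
  () => { calls.push('patch'); },
  async () => { calls.push('work'); }
).then(() => console.log(calls.join(','))); // unpatch,work,patch
```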
Azure OpenAI support
The middleware automatically detects Azure OpenAI clients and provides accurate usage tracking and cost calculation.
# Set your Azure OpenAI environment variables
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_DEPLOYMENT="gpt-4o" # Your deployment name
export AZURE_OPENAI_API_VERSION="2024-12-01-preview" # Optional, defaults to latest
# Set your Revenium credentials
export REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
# export REVENIUM_METERING_BASE_URL="https://api.revenium.io/meter" # Optional: defaults to this URL
import { initializeReveniumFromEnv, patchOpenAIInstance } from 'revenium-middleware-openai-node';
import { AzureOpenAI } from 'openai';
// Initialize Revenium middleware
initializeReveniumFromEnv();
// Create and patch Azure OpenAI client
const azure = patchOpenAIInstance(new AzureOpenAI({
endpoint: process.env.AZURE_OPENAI_ENDPOINT,
apiKey: process.env.AZURE_OPENAI_API_KEY,
apiVersion: process.env.AZURE_OPENAI_API_VERSION,
}));
// Your existing Azure OpenAI code works with seamless metadata
const response = await azure.chat.completions.create({
model: 'gpt-4o', // Uses your deployment name
messages: [{ role: 'user', content: 'Hello from Azure!' }],
// ✨ Optional metadata with native TypeScript support
usageMetadata: {
organizationId: 'my-company',
taskType: 'azure-chat'
}
});
console.log(response.choices[0].message.content);
- ✅ Automatic Detection: Detects Azure OpenAI clients automatically
- ✅ Model Name Resolution: Maps Azure deployment names to standard model names for accurate pricing
- ✅ Provider Metadata: Correctly tags requests with `provider: "Azure"` and `modelSource: "OPENAI"`
- ✅ Deployment Support: Works with any Azure deployment name (simple or complex)
- ✅ Endpoint Flexibility: Supports all Azure OpenAI endpoint formats
- ✅ Zero Code Changes: Existing Azure OpenAI code works without modification
| Variable | Required | Description | Example |
|---|---|---|---|
| `AZURE_OPENAI_ENDPOINT` | Yes | Your Azure OpenAI endpoint URL | `https://acme.openai.azure.com/` |
| `AZURE_OPENAI_API_KEY` | Yes | Your Azure OpenAI API key | `abc123...` |
| `AZURE_OPENAI_DEPLOYMENT` | No | Default deployment name | `gpt-4o` or `text-embedding-3-large` |
| `AZURE_OPENAI_API_VERSION` | No | API version (defaults to `2024-12-01-preview`) | `2024-12-01-preview` |
The middleware automatically maps Azure deployment names to standard model names for accurate pricing:
// Azure deployment names → Standard model names for pricing
"gpt-4o-2024-11-20" → "gpt-4o"
"gpt4o-prod" → "gpt-4o"
"o4-mini" → "gpt-4o-mini"
"gpt-35-turbo-dev" → "gpt-3.5-turbo"
"text-embedding-3-large" → "text-embedding-3-large" // Direct match
"embedding-3-large" → "text-embedding-3-large"
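The mapping above can be approximated with simple substring normalization. A simplified sketch (the middleware's actual lookup table is more complete and also covers cases like `o4-mini`; `resolveAzureModel` is an illustrative name):

```typescript
// Simplified sketch of Azure deployment-name → standard model-name
// normalization. Shows the idea only; the real mapping table is larger.
function resolveAzureModel(deployment: string): string {
  const d = deployment.toLowerCase();
  if (d.includes('gpt-4o-mini') || d.includes('gpt4o-mini')) return 'gpt-4o-mini';
  if (d.includes('gpt-4o') || d.includes('gpt4o')) return 'gpt-4o';
  if (d.includes('gpt-35-turbo') || d.includes('gpt-3.5-turbo')) return 'gpt-3.5-turbo';
  if (d.includes('embedding-3-large')) return 'text-embedding-3-large';
  // Unknown deployment names fall through unchanged.
  return deployment;
}

console.log(resolveAzureModel('gpt-4o-2024-11-20')); // gpt-4o
console.log(resolveAzureModel('gpt-35-turbo-dev')); // gpt-3.5-turbo
```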
| Variable | Required | Description |
|---|---|---|
| `REVENIUM_METERING_API_KEY` | Yes | Your Revenium API key (starts with `hak_`) |
| `REVENIUM_METERING_BASE_URL` | No | Revenium API base URL (defaults to `https://api.revenium.io/meter`) |
| `OPENAI_API_KEY` | No | OpenAI API key (if not set in the OpenAI client) |
| `REVENIUM_DEBUG` | No | Set to `true` for debug logging |
All metadata fields are optional and help provide better analytics:
interface UsageMetadata {
traceId?: string; // Session or conversation ID
taskType?: string; // Type of AI task (e.g., "chat", "summary")
subscriber?: { // User information (nested structure)
id?: string; // User ID from your system
email?: string; // User's email address
credential?: { // User credentials
name?: string; // Credential name
value?: string; // Credential value
};
};
organizationId?: string; // Organization/company ID
subscriptionId?: string; // Billing plan ID
productId?: string; // Your product/feature ID
agent?: string; // AI agent identifier
responseQualityScore?: number; // Quality score (0-1)
}
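Since every field is optional, metadata can be built up incrementally. One useful guard is keeping `responseQualityScore` inside its documented 0-1 range before sending; a sketch (the `withQualityScore` helper is illustrative, not part of the package):

```typescript
// Trimmed copy of the UsageMetadata shape above, plus an illustrative
// guard that clamps responseQualityScore into the documented 0-1 range.
interface UsageMetadata {
  traceId?: string;
  taskType?: string;
  subscriber?: { id?: string; email?: string };
  organizationId?: string;
  responseQualityScore?: number;
}

function withQualityScore(meta: UsageMetadata, score: number): UsageMetadata {
  // Clamp into [0, 1] rather than sending an out-of-range score.
  return { ...meta, responseQualityScore: Math.min(1, Math.max(0, score)) };
}

const meta = withQualityScore({ organizationId: 'acme-corp' }, 1.4);
console.log(meta.responseQualityScore); // 1
```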
- Automatic Patching: When imported, the middleware patches OpenAI's `chat.completions.create` method
- Request Interception: All OpenAI requests are intercepted to extract metadata
- Usage Extraction: Token counts, model info, and timing data are captured
- Async Tracking: Usage data is sent to Revenium in the background (fire-and-forget)
- Transparent Response: Original OpenAI responses are returned unchanged
The middleware never blocks your application - if Revenium tracking fails, your OpenAI requests continue normally.
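That fire-and-forget behavior can be sketched as a wrapper that starts the metering request but never awaits it and swallows failures (`trackInBackground` is an illustrative name, not the package's API):

```typescript
// Illustrative fire-and-forget wrapper: kick off the metering call and
// swallow any failure so the caller's request path is never blocked.
function trackInBackground(send: () => Promise<void>): void {
  send().catch((err) => {
    // In the real middleware a failure here is at most surfaced in debug logs.
    console.error('[tracking failed]', err);
  });
}

// Even if the tracker always fails, the caller continues normally.
trackInBackground(async () => {
  throw new Error('metering endpoint unreachable');
});
console.log('request path not blocked');
```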
Not seeing tracking data in your dashboard? Check that you see these logs when making OpenAI calls:
export REVENIUM_DEBUG=true
# Expected output:
[Revenium Debug] OpenAI chat.completions.create intercepted
[Revenium Debug] Revenium tracking successful
If you don't see these logs, or encounter other issues, see our Detailed Troubleshooting Guide for framework-specific solutions and common integration problems.
- Node.js 16+
- OpenAI package v4.0+
- TypeScript 5.0+ (for TypeScript projects)
For issues or feature requests, please contact the Revenium team.