revenium-middleware-openai-node

1.0.7 • Public • Published

Revenium OpenAI Middleware for Node.js

A transparent Node.js middleware that automatically tracks OpenAI usage and sends metrics to Revenium for billing and analytics. Features seamless metadata integration with native TypeScript support - no type casting required! Works with both TypeScript and JavaScript projects.

Features

  • Seamless metadata integration - Native TypeScript support, no type casting required
  • Optional metadata - Track users, organizations, and other custom metadata (all metadata fields are optional)
  • Azure OpenAI support - Full support for Azure OpenAI with automatic detection
  • TypeScript & JavaScript support - Complete type safety for TS, works great with JS
  • Streaming support - Handles both regular and streaming OpenAI requests
  • Fire-and-forget tracking - Never blocks your application flow

Installation

npm install revenium-middleware-openai-node

Quick Start

1. Install the Package

npm install revenium-middleware-openai-node

2. Set Environment Variables

export REVENIUM_METERING_API_KEY=hak_your_revenium_api_key
# export REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter  # Optional: defaults to this URL
export OPENAI_API_KEY=sk_your_openai_api_key
export REVENIUM_DEBUG=true  # Optional: for debug logging

Or create a .env file in your project root:

# .env file
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key
# REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter  # Optional: defaults to this URL
OPENAI_API_KEY=sk_your_openai_api_key
REVENIUM_DEBUG=true

3. Use in Your Code

TypeScript/ES6 Modules

// Step 1: Import and initialize Revenium middleware
import { initializeReveniumFromEnv, patchOpenAIInstance } from 'revenium-middleware-openai-node';
import OpenAI from 'openai';

// Step 2: Initialize middleware and patch OpenAI instance
initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());

// Step 3: Use OpenAI exactly as you normally would
const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'user', content: 'Hello, world!' }
  ]
});

console.log(response.choices[0].message.content);
// ✨ Usage automatically tracked to Revenium!

JavaScript/CommonJS

// Step 1: Import and initialize Revenium middleware
const { initializeReveniumFromEnv, patchOpenAIInstance } = require('revenium-middleware-openai-node');
const OpenAI = require('openai');

// Step 2: Initialize middleware and patch OpenAI instance
initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());

// Step 3: Use OpenAI exactly as you normally would
const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'user', content: 'Hello, world!' }
  ]
});

console.log(response.choices[0].message.content);
// ✨ Usage automatically tracked to Revenium!

Examples

Want to see it in action? Check out the complete working examples:

  • OpenAI Basic - Chat completions and embeddings with optional metadata
  • OpenAI Streaming - Streaming responses and batch embeddings with optional metadata
  • Azure Basic - Azure OpenAI chat completions and embeddings with optional metadata
  • Azure Streaming - Azure OpenAI streaming and batch embeddings with optional metadata

Adding Custom Metadata

Track users, organizations, and custom data with seamless TypeScript integration:

import { initializeReveniumFromEnv, patchOpenAIInstance } from 'revenium-middleware-openai-node';
import OpenAI from 'openai';

// Initialize and patch OpenAI instance
initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());

const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'Summarize this document' }
  ],
  // ✨ Add custom tracking metadata - all fields optional, no type casting needed!
  usageMetadata: {
    subscriber: {
      id: 'user-12345',
      email: 'john@acme-corp.com'
    },
    organizationId: 'acme-corp',
    productId: 'document-ai',
    taskType: 'document-summary',
    agent: 'doc-summarizer-v2',
    traceId: 'session-abc123'
  }
});

Streaming Support

The middleware automatically handles streaming requests with seamless metadata:

import { initializeReveniumFromEnv, patchOpenAIInstance } from 'revenium-middleware-openai-node';
import OpenAI from 'openai';

// Initialize and patch OpenAI instance
initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());

const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
  // ✨ Metadata works seamlessly with streaming - all fields optional!
  usageMetadata: {
    organizationId: 'story-app',
    taskType: 'creative-writing'
  }
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Usage tracking happens automatically when stream completes

Temporarily Disabling Tracking

If you need to disable Revenium tracking temporarily, you can unpatch the OpenAI client:

import { unpatchOpenAI, patchOpenAI } from 'revenium-middleware-openai-node';

// Disable tracking
unpatchOpenAI();

// Your OpenAI calls now bypass Revenium tracking
await openai.chat.completions.create({...});

// Re-enable tracking
patchOpenAI();

Common use cases:

  • Debugging: Isolate whether issues are caused by the middleware
  • Testing: Compare behavior with/without tracking
  • Conditional tracking: Enable/disable based on environment
  • Troubleshooting: Temporary bypass during incident response

Note: This affects all OpenAI instances globally since we patch the prototype methods.
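The "conditional tracking" use case above can be sketched as a small policy helper. This is an illustrative sketch, not part of the package: the `patch`/`unpatch` callbacks stand in for the `patchOpenAI`/`unpatchOpenAI` exports so the policy is easy to test in isolation, and the track-only-in-production rule is an assumption.

```typescript
// Sketch: enable tracking only in production. The callbacks stand in for the
// package's patchOpenAI/unpatchOpenAI exports; the policy is an example.
type TrackingToggle = { patch: () => void; unpatch: () => void };

function applyTrackingPolicy(env: string | undefined, toggle: TrackingToggle): boolean {
  const enabled = env === 'production';
  if (enabled) {
    toggle.patch();   // tracking on
  } else {
    toggle.unpatch(); // tracking off
  }
  return enabled;
}
```

Call it once at startup with `process.env.NODE_ENV` and the real `patchOpenAI`/`unpatchOpenAI` functions.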

Azure OpenAI Integration

The middleware automatically detects Azure OpenAI clients and provides accurate usage tracking and cost calculation.

Quick Start with Azure OpenAI

# Set your Azure OpenAI environment variables
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_DEPLOYMENT="gpt-4o"  # Your deployment name
export AZURE_OPENAI_API_VERSION="2024-12-01-preview"  # Optional, defaults to latest

# Set your Revenium credentials
export REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
# export REVENIUM_METERING_BASE_URL="https://api.revenium.io/meter"  # Optional: defaults to this URL

Then use the patched Azure client in your code:

import { initializeReveniumFromEnv, patchOpenAIInstance } from 'revenium-middleware-openai-node';
import { AzureOpenAI } from 'openai';

// Initialize Revenium middleware
initializeReveniumFromEnv();

// Create and patch Azure OpenAI client
const azure = patchOpenAIInstance(new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: process.env.AZURE_OPENAI_API_VERSION,
}));

// Your existing Azure OpenAI code works with seamless metadata
const response = await azure.chat.completions.create({
  model: 'gpt-4o',  // Uses your deployment name
  messages: [{ role: 'user', content: 'Hello from Azure!' }],
  // ✨ Optional metadata with native TypeScript support
  usageMetadata: {
    organizationId: 'my-company',
    taskType: 'azure-chat'
  }
});

console.log(response.choices[0].message.content);

Azure Features

  • Automatic Detection: Detects Azure OpenAI clients automatically
  • Model Name Resolution: Maps Azure deployment names to standard model names for accurate pricing
  • Provider Metadata: Correctly tags requests with provider: "Azure" and modelSource: "OPENAI"
  • Deployment Support: Works with any Azure deployment name (simple or complex)
  • Endpoint Flexibility: Supports all Azure OpenAI endpoint formats
  • Zero Code Changes: Existing Azure OpenAI code works without modification
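The "Automatic Detection" feature above can work along these lines. This is an illustrative sketch of the idea, not the package's actual implementation: an Azure client is recognizable by its constructor name or by a baseURL under *.openai.azure.com.

```typescript
// Sketch: recognize an Azure OpenAI client by constructor name or endpoint.
// Not the package's actual check; names and heuristics are illustrative.
function looksLikeAzureClient(client: { constructor: { name: string }; baseURL?: string }): boolean {
  return client.constructor.name === 'AzureOpenAI'
    || /\.openai\.azure\.com/.test(client.baseURL ?? '');
}
```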

Azure Environment Variables

| Variable | Required | Description | Example |
|----------|----------|-------------|---------|
| AZURE_OPENAI_ENDPOINT | Yes | Your Azure OpenAI endpoint URL | https://acme.openai.azure.com/ |
| AZURE_OPENAI_API_KEY | Yes | Your Azure OpenAI API key | abc123... |
| AZURE_OPENAI_DEPLOYMENT | No | Default deployment name | gpt-4o or text-embedding-3-large |
| AZURE_OPENAI_API_VERSION | No | API version (defaults to 2024-12-01-preview) | 2024-12-01-preview |

Azure Model Name Resolution

The middleware automatically maps Azure deployment names to standard model names for accurate pricing:

// Azure deployment names → standard model names for pricing
"gpt-4o-2024-11-20"       → "gpt-4o"
"gpt4o-prod"              → "gpt-4o"
"o4-mini"                 → "gpt-4o-mini"
"gpt-35-turbo-dev"        → "gpt-3.5-turbo"
"text-embedding-3-large"  → "text-embedding-3-large"  // Direct match
"embedding-3-large"       → "text-embedding-3-large"

Configuration

Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| REVENIUM_METERING_API_KEY | Yes | Your Revenium API key (starts with hak_) |
| REVENIUM_METERING_BASE_URL | No | Revenium API base URL (defaults to https://api.revenium.io/meter) |
| OPENAI_API_KEY | No | OpenAI API key (if not set in the OpenAI client) |
| REVENIUM_DEBUG | No | Set to true for debug logging |
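A fail-fast check at startup makes missing configuration obvious. This is a minimal sketch; `assertEnv` is an illustrative helper, not a package export.

```typescript
// Sketch: verify required configuration before calling
// initializeReveniumFromEnv(). `assertEnv` is illustrative, not part of the
// package.
function assertEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example (REVENIUM_METERING_API_KEY is the only required variable):
// const apiKey = assertEnv('REVENIUM_METERING_API_KEY');
```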

Usage Metadata Options

All metadata fields are optional and help provide better analytics:

interface UsageMetadata {
  traceId?: string;                    // Session or conversation ID
  taskType?: string;                   // Type of AI task (e.g., "chat", "summary")
  subscriber?: {                       // User information (nested structure)
    id?: string;                       // User ID from your system
    email?: string;                    // User's email address
    credential?: {                     // User credentials
      name?: string;                   // Credential name
      value?: string;                  // Credential value
    };
  };
  organizationId?: string;             // Organization/company ID
  subscriptionId?: string;             // Billing plan ID
  productId?: string;                  // Your product/feature ID
  agent?: string;                      // AI agent identifier
  responseQualityScore?: number;       // Quality score (0-1)
}

How It Works

  1. Patching: initializeReveniumFromEnv() and patchOpenAIInstance() wrap OpenAI's chat.completions.create method
  2. Request Interception: All OpenAI requests are intercepted to extract metadata
  3. Usage Extraction: Token counts, model info, and timing data are captured
  4. Async Tracking: Usage data is sent to Revenium in the background (fire-and-forget)
  5. Transparent Response: Original OpenAI responses are returned unchanged

The middleware never blocks your application - if Revenium tracking fails, your OpenAI requests continue normally.
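The fire-and-forget behavior described above can be sketched as a generic wrapper. This is a conceptual illustration of the pattern, not the package's actual code: the original result is returned unchanged, tracking runs in the background, and tracking failures are swallowed.

```typescript
// Conceptual sketch of the fire-and-forget pattern (illustrative names):
// the wrapped method returns the original result unchanged while tracking
// runs in the background and its errors are ignored.
type AsyncFn<T> = (...args: any[]) => Promise<T>;

function withTracking<T>(
  original: AsyncFn<T>,
  track: (result: T) => Promise<void>,
): AsyncFn<T> {
  return async (...args) => {
    const result = await original(...args);
    // Fire-and-forget: do not await, and never surface tracking errors
    track(result).catch(() => { /* ignored */ });
    return result;
  };
}
```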

Troubleshooting

Not seeing tracking data in your dashboard? Enable debug logging and confirm that these log lines appear when you make OpenAI calls:

export REVENIUM_DEBUG=true
# Expected output:
[Revenium Debug] OpenAI chat.completions.create intercepted
[Revenium Debug] Revenium tracking successful

If you don't see these logs, or encounter other issues, see our Detailed Troubleshooting Guide for framework-specific solutions and common integration problems.

Requirements

  • Node.js 16+
  • OpenAI package v4.0+
  • TypeScript 5.0+ (for TypeScript projects)

Contributing

For issues or feature requests, please contact the Revenium team.

License

MIT