A WeWeb backend integration for OpenAI's API, providing access to AI models for chat completions, embeddings, content moderation, and image generation within WeWeb backend workflows.
- Simple integration with the OpenAI API
- Support for Chat Completions (GPT-3.5/GPT-4)
- Text embeddings generation
- Content moderation
- Image generation with DALL-E
This package is designed to work with the WeWeb Supabase Backend Builder and Deno.
Import what you need from the core package and this integration:

```typescript
import { serve } from '@weweb/backend-core';
import { createOpenAIIntegration } from '@weweb/backend-openai';
```
```typescript
import type { BackendConfig } from '@weweb/backend-core';
import { serve } from '@weweb/backend-core';
import OpenAI from '@weweb/backend-openai';

// Define your backend configuration
const config: BackendConfig = {
  workflows: [
    // Your workflows here
  ],
  integrations: [
    // Use the default OpenAI integration
    OpenAI,
    // Or add other integrations
  ],
  production: false,
};

// Start the server
const server = serve(config);
```
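Run the file with Deno; a typical invocation is `deno run --allow-net --allow-env main.ts` (the exact entry point and permission flags depend on your project).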
You can customize the OpenAI client by using the `createOpenAIIntegration` function:
```typescript
import { createOpenAIIntegration } from '@weweb/backend-openai';

// Create a custom OpenAI integration
const customOpenAI = createOpenAIIntegration({
  apiKey: 'your-api-key', // Override environment variable
  organization: 'your-org-id', // Optional
  baseURL: 'https://your-custom-endpoint.com', // Optional
});
```
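Once created, register the customized integration in your backend configuration in place of the default export (a minimal sketch reusing only constructs shown above):

```typescript
import type { BackendConfig } from '@weweb/backend-core';
import { serve } from '@weweb/backend-core';
import { createOpenAIIntegration } from '@weweb/backend-openai';

const customOpenAI = createOpenAIIntegration({
  apiKey: 'your-api-key',
});

const config: BackendConfig = {
  workflows: [
    // Your workflows here
  ],
  integrations: [customOpenAI], // use the customized client
  production: false,
};

serve(config);
```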
If these options are not provided, the integration falls back to the following environment variables:

- `OPENAI_API_KEY` - Your OpenAI API key
- `OPENAI_ORGANIZATION` - Your OpenAI organization ID (optional)
- `OPENAI_BASE_URL` - Custom API endpoint URL (optional)
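For illustration, the fallback described above is roughly equivalent to wiring the variables up yourself (a sketch only; the integration does this internally, and reading environment variables in Deno requires the `--allow-env` permission):

```typescript
import { createOpenAIIntegration } from '@weweb/backend-openai';

// Sketch of the documented environment-variable fallback
const openAI = createOpenAIIntegration({
  apiKey: Deno.env.get('OPENAI_API_KEY'),
  organization: Deno.env.get('OPENAI_ORGANIZATION'),
  baseURL: Deno.env.get('OPENAI_BASE_URL'),
});
```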
Generate responses from OpenAI's GPT models.
```typescript
// Example workflow action
const config = {
  type: 'action',
  id: 'generate_response',
  actionId: 'openai.create_chat_completion',
  inputMapping: [
    {
      model: 'gpt-4',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: '$body.question' },
      ],
      temperature: 0.7,
      max_tokens: 500,
    },
  ],
};
```
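Assuming the action passes through the standard OpenAI chat completion response, the generated text is available at `choices[0].message.content` of the result.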
Generate vector embeddings from text for semantic search and similarity.
```typescript
// Example workflow action
const config = {
  type: 'action',
  id: 'create_embedding',
  actionId: 'openai.create_embeddings',
  inputMapping: [
    {
      model: 'text-embedding-ada-002',
      input: '$body.text',
    },
  ],
};
```
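Assuming the standard OpenAI embeddings response is passed through, the vectors are available under `data[0].embedding`.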
Check content for potentially harmful or sensitive material.
```typescript
// Example workflow action
const config = {
  type: 'action',
  id: 'moderate_content',
  actionId: 'openai.create_moderation',
  inputMapping: [
    {
      input: '$body.text',
      model: 'text-moderation-latest',
    },
  ],
};
```
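Assuming the standard OpenAI moderation response, each result exposes a boolean `flagged` plus per-category flags and scores under `results[0]`.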
Generate images from text descriptions using DALL-E.
```typescript
// Example workflow action
const config = {
  type: 'action',
  id: 'generate_image',
  actionId: 'openai.generate_image',
  inputMapping: [
    {
      prompt: '$body.description',
      model: 'dall-e-3',
      size: '1024x1024',
      quality: 'standard',
      style: 'vivid',
    },
  ],
};
```
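Assuming the standard OpenAI images response, each generated image appears under `data[n].url` (or `data[n].b64_json` when `response_format` is `b64_json`).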
The OpenAI integration includes a detailed schema that defines all input parameters and output structures for each method. This schema is used for validation and documentation.
Chat completion parameters:

- `messages`: Array of message objects with role and content
- `model`: ID of the model to use (e.g., `gpt-4`, `gpt-3.5-turbo`)
- `temperature`: Controls randomness (0-2)
- `max_tokens`: Maximum number of tokens to generate
- `top_p`: Alternative to temperature for nucleus sampling
- `frequency_penalty`: Decreases the likelihood of repeating tokens
- `presence_penalty`: Increases the likelihood of introducing new topics
- `stream`: Whether to stream the response
- `stop`: Sequences where the API will stop generating
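For reference, these map to an input shape roughly like the following TypeScript sketch (the interface name is ours; the integration's actual schema may differ in detail):

```typescript
interface ChatCompletionInput {
  model: string; // e.g., 'gpt-4', 'gpt-3.5-turbo'
  messages: { role: 'system' | 'user' | 'assistant'; content: string }[];
  temperature?: number; // 0-2, controls randomness
  max_tokens?: number; // maximum tokens to generate
  top_p?: number; // nucleus sampling, alternative to temperature
  frequency_penalty?: number; // discourage repeated tokens
  presence_penalty?: number; // encourage new topics
  stream?: boolean; // stream the response
  stop?: string | string[]; // stop sequences
}
```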
Embeddings parameters:

- `input`: Text to embed (string or array of strings)
- `model`: Model to use (e.g., `text-embedding-ada-002`)
- `encoding_format`: Format for the embeddings (`float` or `base64`)
- `dimensions`: Number of dimensions for the embeddings
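A corresponding sketch of the embeddings input shape (hypothetical name, same caveats as above):

```typescript
interface EmbeddingsInput {
  input: string | string[]; // text to embed
  model: string; // e.g., 'text-embedding-ada-002'
  encoding_format?: 'float' | 'base64';
  dimensions?: number; // output dimensionality
}
```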
Moderation parameters:

- `input`: Text to moderate (string or array of strings)
- `model`: Moderation model to use
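Sketch of the moderation input shape (hypothetical name):

```typescript
interface ModerationInput {
  input: string | string[]; // text to moderate
  model?: string; // e.g., 'text-moderation-latest'
}
```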
Image generation parameters:

- `prompt`: Text description of the desired image
- `model`: Model to use (e.g., `dall-e-3`)
- `n`: Number of images to generate
- `size`: Size of the generated images
- `quality`: Quality of the generated images
- `style`: Style of the generated images
- `response_format`: Format in which to return the images (`url` or `b64_json`)
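Sketch of the image generation input shape (hypothetical name):

```typescript
interface GenerateImageInput {
  prompt: string; // description of the desired image
  model?: string; // e.g., 'dall-e-3'
  n?: number; // number of images to generate
  size?: string; // e.g., '1024x1024'
  quality?: 'standard' | 'hd';
  style?: 'vivid' | 'natural';
  response_format?: 'url' | 'b64_json';
}
```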
Run the test suite:

```bash
deno test
```
Format and lint your code:
```bash
deno fmt
deno lint
```
```typescript
import type { BackendConfig } from '@weweb/backend-core';
import { serve } from '@weweb/backend-core';
import OpenAI from '@weweb/backend-openai';

const config: BackendConfig = {
  workflows: [
    {
      path: '/chat',
      methods: ['POST'],
      security: {
        accessRule: 'public',
      },
      inputsValidation: {
        body: {
          type: 'object',
          properties: {
            messages: {
              type: 'array',
              items: {
                type: 'object',
                properties: {
                  role: { type: 'string' },
                  content: { type: 'string' },
                },
                required: ['role', 'content'],
              },
            },
          },
          required: ['messages'],
        },
      },
      workflow: [
        {
          type: 'action',
          id: 'chat_completion',
          actionId: 'openai.create_chat_completion',
          inputMapping: [
            {
              messages: '$body.messages',
              model: 'gpt-3.5-turbo',
              temperature: 0.7,
              max_tokens: 1000,
            },
          ],
        },
      ],
    },
  ],
  integrations: [OpenAI],
  production: false,
};

console.log('Starting OpenAI chat server on http://localhost:8000/chat');
serve(config);
```
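With the server running, you can exercise the endpoint from any HTTP client; for example, from another Deno script (assumes the example server above is listening on port 8000):

```typescript
// Send a chat request to the example /chat workflow and print the result.
const res = await fetch('http://localhost:8000/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Say hello in one sentence.' }],
  }),
});

console.log(await res.json());
```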