Elelem

Elelem is a simple, opinionated, JSON-typed, and traced LLM framework in TypeScript.

Why another LLM library?

In September 2023, I tried to port MealByMeal, a production LLM-based application, over to LangChain. Caching wasn't supported for chat-based endpoints (specifically for gpt-3.5-turbo), and the interface for interacting with those endpoints felt awkward. LangChain Expression Language (LCEL) has since been introduced, but handling and enforcing typed outputs is still repetitive and error-prone. Furthermore, without leveraging gpt-4, structured outputs are seldom valid on the first attempt, so retries and parsing errors matter; for debugging those nuances, the built-in tracing leaves much to be desired. All of these issues led me to create my own lightweight library.

How does Elelem compare to LangChain?

| Feature                                        | Elelem | Langchain |
| ---------------------------------------------- | ------ | --------- |
| TypeScript library                             | ✅     | ✅        |
| OpenAI generation support                      | ✅     | ✅        |
| Cohere generation support                      | ❌     | ✅        |
| Anthropic generation support                   | ❌     | ✅        |
| Emphasis on typed LLM outputs                  | ✅     | ❌        |
| Easily composable multi-step LLM workflows     | ✅     | ✅        |
| Convenient API for single chat completions     | ✅     | ❌        |
| Caching for OpenAI chat endpoints              | ✅     | ❌        |
| OpenTelemetry support                          | ✅     | ❌        |
| Autogenerated JSON examples in prompts         | ✅     | ❌        |
| Python library                                 | ❌     | ✅        |
| Support for many models (Claude, Llama, etc.)  | ❌     | ✅        |
| Vector store support                           | ❌     | ✅        |
| A million other features                       | ❌     | ✅        |

Example

Install with `npm install elelem` or `yarn add elelem`.

You'll also need `zod` and `openai` (`yarn add zod openai`), plus `ioredis` (`yarn add ioredis`) if you're using Redis for caching (see src/elelem.test.ts for an example of setting up caching).
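
If you do use Redis, the client itself comes from ioredis; a minimal sketch is below (how the client is passed to elelem is shown in src/elelem.test.ts):

import Redis from "ioredis";

// Connect to the Redis instance used for caching chat completions;
// the connection string is read from an environment variable here.
const redis = new Redis(process.env.REDIS ?? "redis://localhost:6379");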

Usage:

import { z } from "zod";
import OpenAI from "openai";
import { elelem, JsonSchemaAndExampleFormatter } from "elelem";

// Zod schemas describing the JSON each LLM call must return.
const capitolResponseSchema = z.object({
    capitol: z.string(),
});

const cityResponseSchema = z.object({
    foundingYear: z.string(),
    populationEstimate: z.number(),
});

// Initialize elelem with an OpenAI client.
const llm = elelem.init({
    openai: new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
});

const inputCountry = "USA";

(async () => {
    // A session groups multiple chat completions into one traced,
    // multi-step workflow and reports combined token usage and cost.
    const { result, usage } = await llm.session(
        "capitol-info-retriever",
        { openai: { model: "gpt-3.5-turbo" } },
        async (c) => {
            // Each step sends a prompt plus the input and validates the
            // completion against the given Zod schema.
            const { result: capitol } = await c.openai(
                "get-capitol",
                { max_tokens: 100, temperature: 0 },
                `What is the capitol of the country provided?`,
                inputCountry,
                capitolResponseSchema,
                JsonSchemaAndExampleFormatter,
            );

            // The second step consumes the typed result of the first.
            const { result: cityDescription } = await c.openai(
                "city-description",
                { max_tokens: 100, temperature: 0 },
                `For the given capitol city, return the founding year and an estimate of the population of the city.`,
                capitol.capitol,
                cityResponseSchema,
                JsonSchemaAndExampleFormatter,
            );

            return cityDescription;
        },
    );

    console.log(result);
    // { foundingYear: '1790', populationEstimate: 705749 }

    console.log(usage);
    // {
    //     completion_tokens: 26,
    //     prompt_tokens: 695,
    //     total_tokens: 721,
    //     cost_usd: 0.0010945
    // }
})();
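
Because each call is parsed with its Zod schema, the result is also statically typed; for example, the shape of the final result above matches the type inferred from cityResponseSchema:

type CityResponse = z.infer<typeof cityResponseSchema>;
// => { foundingYear: string; populationEstimate: number }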

Viewing Traces on Jaeger

Start Jaeger locally using:

docker run --rm --name jaeger \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 14250:14250 \
  -p 14268:14268 \
  -p 14269:14269 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.49
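
The tracing snippet below assumes the OpenTelemetry packages it imports are installed, e.g. with `yarn add @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-proto`.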

Enable publishing traces to Jaeger with the following:

import * as opentelemetry from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";

// With no options, the OTLP exporter sends traces to http://localhost:4318,
// one of the ports exposed by the Jaeger container above.
const sdk = new opentelemetry.NodeSDK({
    serviceName: "your-service-name",
    traceExporter: new OTLPTraceExporter(),
});
sdk.start();

// rest of your code...

// Flush any pending spans before the process exits.
process.on('SIGTERM', () => {
    sdk.shutdown()
        .then(() => console.log('Tracing terminated'))
        .catch((error) => console.log('Error terminating tracing', error))
        .finally(() => process.exit(0));
});

When you run your code, your traces will be available at http://localhost:16686/.

What do the traces look like in Jaeger?

[Screenshot: exploring traces in Jaeger]

Tracing in Production

See the OpenTelemetry docs for more information on sending traces to hosted instances of Zipkin, Jaeger, Datadog, etc.
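
As a sketch, the same OTLPTraceExporter from the example above can be pointed at a hosted collector instead of the local Jaeger container (the endpoint URL and header name below are placeholders, not a real service):

// Hypothetical hosted-collector configuration; substitute your vendor's
// OTLP endpoint and authentication header.
const exporter = new OTLPTraceExporter({
    url: "https://otlp.example.com/v1/traces",
    headers: { "x-api-key": process.env.OTLP_API_KEY ?? "" },
});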

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

For Contributors: Running Integration Tests

To run the tests, first make sure you have Git, Yarn, and Docker installed. Then check out the repo and install dependencies:

git clone git@github.com:jrhizor/elelem.git
cd elelem
yarn install

Create a .env file:

OPENAI_API_KEY=<your key>
REDIS=redis://localhost:6379

Start up Redis:

docker run -it -p 6379:6379 redis

Start up Jaeger:

docker run --rm --name jaeger \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 14250:14250 \
  -p 14268:14268 \
  -p 14269:14269 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.49

Now you're ready to run the unit and integration tests:

yarn test

License

MIT
