offload-ai

Offload - Run AI inference on your users' devices

Offload is an SDK that allows you to automatically run AI inference directly on your users' devices.

For users, opting in to local execution means the highest level of security and data privacy, with minimal code changes to your application. As a bonus, your cloud inference cost drops to zero for every user who enables Offload.

How it works

By default, Offload shows a widget when a user's device has enough resources to run AI locally. When the user clicks the widget, Offload is enabled and all inference tasks execute directly on the device, so the user's data is never sent to a third-party inference API.

When a user's device does not support local inference, the Offload widget is simply not shown, and inference tasks run via an API that you can configure on the dashboard. The same fallback API is used when a user decides, for whatever reason, not to enable Offload.

Offload automatically serves each user the model that best fits their device resources. Different models are supported for different GPUs, mobile, desktop, and so on.

On the Offload dashboard you can configure different models per target device, adjust prompts per model, track analytics, configure fallback APIs, and more.

Let your users opt for privacy! Join us today!

Usage

The steps below show how to use Offload. You can find complete usage examples in this repository.

  1. Install offload

    • Using a package manager:
    npm install --save offload-ai

    Then in your code use:

    import Offload from "offload-ai";
    • From a CDN script tag:
    <script src="//unpkg.com/offload-ai" defer></script>

    Then in your code use:

    window.Offload.<method>
  2. Create a prompt in the dashboard and select a model

  3. Configure the SDK:

    Offload.config({
        appUuid: "b370195d-a8ad-47bd-9d25-2818a6905896",
        promptUuids: { // maps meaningful prompt names to prompt IDs from the dashboard
            user_text: "4e151113-22ae-41e8-abf1-c8b358163cc9"
        }
    });
  4. Add the widget:

    Create a container for the widget on your page:

    <div id="offload-widget-container"></div>

    And initialize the widget (make sure the following JS runs after the div above exists):

    Offload.Widget('offload-widget-container');
  5. Use the SDK to run prompts:

    try {
        const response = await (window as any).Offload.offload({
            promptKey: "user_text", // the key you give to the prompt uuid in the configuration object.
        });
        console.log(response.text)
        console.log(response.finishReason);
        console.log(response.usage);
    } catch(e: any) {
        console.error(e);
    }
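
    The fields read above (text, finishReason, usage) suggest a response shape like the one sketched below. This is inferred from the examples in this README rather than taken from the SDK's published types, and runPrompt and the stub are hypothetical helpers for illustration only:

    ```typescript
    // Assumed response shape, inferred from the README examples.
    interface OffloadTextResponse {
      text: string;
      finishReason: string; // exact values are not documented here
      usage: { promptTokens?: number; completionTokens?: number }; // assumed shape
    }

    // Hypothetical helper that centralizes error handling around an
    // offload-style call, returning null on failure.
    async function runPrompt(
      offloadFn: (opts: { promptKey: string }) => Promise<OffloadTextResponse>,
      promptKey: string
    ): Promise<string | null> {
      try {
        const response = await offloadFn({ promptKey });
        return response.text;
      } catch (e) {
        console.error("Inference failed:", e);
        return null;
      }
    }

    // Usage with a stub standing in for window.Offload.offload:
    const stub = async ({ promptKey }: { promptKey: string }) => ({
      text: `echo:${promptKey}`,
      finishReason: "stop",
      usage: {},
    });
    runPrompt(stub, "user_text").then((t) => console.log(t)); // "echo:user_text"
    ```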

Generate structured data

Add the schema field with a JSON schema:

try {
    const response = await (window as any).Offload.offload({
        promptKey: "user_text", // the key you gave to the prompt uuid in the configuration object.
        // A JSON schema to generate the output
        schema: {
            "$schema": "http://json-schema.org/draft-07/schema#",
            "type": "object",
            "properties": {
                "name": {
                    "type": "string"
                },
                "age": {
                    "type": "integer"
                }
            }
        }
    });
    console.log(response.object); // Note this is now response.object instead of response.text
    console.log(response.finishReason);
    console.log(response.usage);
} catch(e: any) {
    console.error(e);
}
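
Even when generation is constrained by a schema, a light runtime check before consuming response.object can catch surprises. The validator below is a minimal sketch covering only the object-with-typed-properties case from the schema above, not full JSON Schema; matchesSchema is a hypothetical helper, not part of the SDK:

```typescript
type PropSchema = { type: string };
type ObjectSchema = { type: "object"; properties: Record<string, PropSchema> };

// Checks that each present property matches its declared type.
// Only "string" and "integer" are handled in this sketch.
function matchesSchema(value: unknown, schema: ObjectSchema): boolean {
  if (typeof value !== "object" || value === null) return false;
  const obj = value as Record<string, unknown>;
  return Object.entries(schema.properties).every(([key, prop]) => {
    if (!(key in obj)) return true; // properties are optional unless "required" is set
    const v = obj[key];
    if (prop.type === "string") return typeof v === "string";
    if (prop.type === "integer") return typeof v === "number" && Number.isInteger(v);
    return true; // other types not handled here
  });
}

const schema: ObjectSchema = {
  type: "object",
  properties: { name: { type: "string" }, age: { type: "integer" } },
};
console.log(matchesSchema({ name: "Ada", age: 36 }, schema)); // true
console.log(matchesSchema({ name: "Ada", age: 36.5 }, schema)); // false
```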

Streaming the output

Simply add stream: true:

try {
    const response = await (window as any).Offload.offload({
        promptKey: "user_text", // the key you give to the prompt uuid in the configuration object.
        stream: true
    });
    console.log(response.textStream); // Note we now use response.textStream
    // Finish reason and usage are now promises
    response.finishReason.then((reason) => console.log(reason));
    response.usage.then((usage) => console.log(usage));
} catch(e: any) {
    console.error(e);
}
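
The example above logs response.textStream directly; in a real UI you would typically consume it chunk by chunk. Assuming textStream is an async iterable of string chunks (an assumption; the SDK may expose a different stream type), the consumption loop looks like this, with a stub generator standing in for a real response:

```typescript
// Accumulates a streamed response; in a UI you might append each chunk
// to the DOM as it arrives instead.
async function collectStream(textStream: AsyncIterable<string>): Promise<string> {
  let full = "";
  for await (const chunk of textStream) {
    full += chunk;
  }
  return full;
}

// Stub stream for illustration:
async function* stubStream() {
  yield "Hello, ";
  yield "world";
}
collectStream(stubStream()).then((s) => console.log(s)); // "Hello, world"
```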

Using prompt variables

Add the variables map:

try {
    const response = await (window as any).Offload.offload({
        promptKey: "user_text", // the key you give to the prompt uuid in the configuration object.
        variables: {
            message: "my user message", // This will substitute the placeholder {{message}} in your prompt
        },
    });
    console.log(response.text)
    console.log(response.finishReason);
    console.log(response.usage);
} catch(e: any) {
    console.error(e);
}
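
For context, the substitution driven by the variables map can be pictured as a simple template fill: each {{name}} placeholder in the dashboard prompt is replaced by the matching entry. The function below is purely illustrative; the real SDK performs this internally:

```typescript
// Replaces {{name}} placeholders with values from the variables map,
// leaving unknown placeholders intact.
function fillTemplate(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in variables ? variables[name] : match
  );
}

console.log(fillTemplate("User said: {{message}}", { message: "my user message" }));
// "User said: my user message"
```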

Version

1.2.1

License

UNLICENSED

Collaborators

  • miguelaeh