A customizable Node.js package that allows you to create a fake API server for frontend development and testing. It supports both static JSON responses and dynamic content generation using an Ollama Large Language Model (LLM) based on defined schemas.
- Static JSON Responses: Define fixed JSON data for specific API routes.
- Dynamic LLM-Generated Responses: Generate realistic and varied JSON data on the fly using an Ollama model.
- Schema-Driven Generation: Provide a schema (variable names, types, and descriptions) and the LLM will generate data conforming to it.
- Array of Objects Support: Easily request an array of generated objects, with a configurable size (defaults to 15 if not specified for nested arrays).
- CORS Enabled: Automatically handles CORS headers for seamless frontend integration.
- Simple API: Easy to start and stop the server.
To use this package in your project, install it:

```bash
npm install fakeapi-ai
```
You need to have Ollama installed and running on your system.
Install Ollama: Follow the instructions on the official Ollama website: https://ollama.ai/
Pull a model: Download a model (e.g., `llama2`, `mistral`, `codellama:13b-instruct`) that you intend to use.

```bash
ollama pull llama2
```
Start the Ollama server:

```bash
ollama serve
```

By default, Ollama runs on `http://localhost:11434`.
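Before starting the fake API, you may want to confirm that Ollama is actually reachable. Here is a minimal sketch (not part of fakeapi-ai) that probes Ollama's `/api/tags` endpoint, which lists locally pulled models; it assumes Node 18+ for the global `fetch`:

```javascript
// Hypothetical helper: returns true if an Ollama server answers at baseUrl.
// /api/tags is Ollama's endpoint for listing locally available models.
async function isOllamaRunning(baseUrl = "http://localhost:11434") {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    return res.ok;
  } catch {
    // Network error: nothing is listening at baseUrl.
    return false;
  }
}
```

You could call this before `startFakeApiServer` and print a warning if it resolves to `false`, so that LLM-backed routes don't fail silently at request time.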
You can integrate fakeapi-ai into your development workflow by creating a simple Node.js script to start it.
Create a file (e.g., `run-fake-api.js`) in your project's root:
```javascript
// run-fake-api.js
const { startFakeApiServer, stopFakeApiServer } = require("fakeapi-ai");

const myFakeRoutes = {
  GET: {
    // --- Static Response Example ---
    "/api/static-users": [
      { id: 1, name: "Alice Smith", email: "alice@example.com" },
      { id: 2, name: "Bob Johnson", email: "bob@example.com" },
    ],

    // --- LLM-Generated Single Object Example ---
    "/api/generated-user-profile": {
      ollama: {
        model: "llama2", // Ensure this model is pulled in your Ollama instance
        schema: {
          id: { type: "number", description: "Unique user identifier" },
          username: { type: "string", description: "User's chosen username" },
          email: { type: "string", description: "User's email address" },
          isActive: {
            type: "boolean",
            description: "Whether the user account is active",
          },
          registrationDate: {
            type: "string",
            description: "Date of registration in YYYY-MM-DD format",
          },
          address: {
            type: "object",
            properties: {
              street: { type: "string" },
              city: { type: "string" },
              zipCode: { type: "string" },
            },
          },
          hobbies: {
            type: "array",
            items: { type: "string" },
            arraySize: 3, // Generate 3 hobbies
          },
        },
        options: { temperature: 0.7 }, // Optional: adjust LLM creativity (0.0-2.0)
      },
    },

    // --- LLM-Generated Array of Objects Example ---
    "/api/generated-products": {
      ollama: {
        model: "llama2",
        count: 5, // Generate an array of 5 product objects
        schema: {
          productId: { type: "string", description: "Unique product ID" },
          productName: { type: "string", description: "Name of the product" },
          price: { type: "number", description: "Price of the product" },
          category: { type: "string", description: "Product category" },
          inStock: { type: "boolean", description: "Availability status" },
        },
      },
    },

    // --- LLM-Generated Report with Nested Array Example ---
    "/api/generated-sales-report": {
      ollama: {
        model: "llama2",
        schema: {
          reportId: { type: "string", description: "Unique report identifier" },
          title: { type: "string", description: "Title of the sales report" },
          dateGenerated: {
            type: "string",
            description: "Current date in YYYY-MM-DD",
          },
          salesData: {
            type: "array",
            items: {
              type: "object",
              properties: {
                month: {
                  type: "string",
                  description: "Month name (e.g., January)",
                },
                revenue: {
                  type: "number",
                  description: "Total revenue for the month",
                },
                expenses: {
                  type: "number",
                  description: "Total expenses for the month",
                },
              },
            },
            arraySize: 3, // Nested array with 3 elements (e.g., for 3 months)
          },
          summary: {
            type: "string",
            description: "A brief summary of the sales report data",
          },
        },
      },
    },
  },
  POST: {
    "/api/submit-form": { message: "Form data received!", status: "success" },
  },
};

const PORT = 8080; // The port your fake API server will listen on
const OLLAMA_URL = "http://localhost:11434"; // Your Ollama server URL

async function runFakeApi() {
  try {
    await startFakeApiServer({
      routes: myFakeRoutes,
      port: PORT,
      ollamaUrl: OLLAMA_URL, // Pass the Ollama URL here to enable LLM generation
    });
    console.log(`Fake API is ready on http://localhost:${PORT}`);

    // Keep the server running until manually stopped (e.g., Ctrl+C)
    process.on("SIGINT", async () => {
      console.log("Stopping fake API...");
      await stopFakeApiServer();
      process.exit(0);
    });
  } catch (error) {
    console.error("Failed to start fake API:", error);
    process.exit(1);
  }
}

runFakeApi();
```
Run the server:

```bash
node run-fake-api.js
```
Your frontend application can now make requests to `http://localhost:8080/api/...` for the routes you defined.
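For instance, a frontend consumer of the routes defined above might look like the following sketch (`apiUrl` and `loadProducts` are illustrative helpers, not part of fakeapi-ai; the port matches the `PORT` used in `run-fake-api.js`):

```javascript
// Illustrative frontend helpers for consuming the fake API.
const API_BASE = "http://localhost:8080";

// Build a full request URL from an API path.
const apiUrl = (path) => `${API_BASE}${path}`;

// Fetch the LLM-generated products route defined in run-fake-api.js.
async function loadProducts() {
  const res = await fetch(apiUrl("/api/generated-products"));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  // With count: 5 in the route config, this resolves to an array of 5 objects.
  return res.json();
}
```

Because CORS headers are handled automatically, the same call works from a browser app served on a different port (e.g., a Vite or webpack dev server).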
`startFakeApiServer(options)` initializes and starts the fake API server.
- `options` (`Object`):
  - `routes` (`Object`): Required. An object defining your API routes.
    - Keys are HTTP methods (`'GET'`, `'POST'`, `'PUT'`, `'DELETE'`).
    - Values are objects mapping API paths (e.g., `'/users'`) to their responses.
    - A response can be:
      - Any static JSON-serializable data (e.g., `{ message: 'Success' }`, `[{ id: 1 }]`).
      - An `ollama` configuration object for LLM-generated responses.
  - `port` (`number`): Optional. The port to run the server on. Defaults to `3000`.
  - `ollamaUrl` (`string`): Optional. The base URL of your Ollama server (e.g., `'http://localhost:11434'`). Required if you use `ollama` configurations in your routes.
- Returns: `Promise<void>` - Resolves when the server starts, rejects on error.
An `ollama` configuration object is used within a route definition to specify an LLM-generated response.
- `model` (`string`): Required. The name of the Ollama model to use (e.g., `'llama2'`, `'mistral'`). Ensure this model is pulled in your Ollama instance.
- `schema` (`Object`): Required. An object defining the structure of the JSON data you want the LLM to generate.
  - Keys are property names.
  - Values are `PropertySchema` objects.
- `count` (`number`): Optional. The number of top-level objects to generate.
  - If `1` (default) or omitted, a single JSON object is returned.
  - If `> 1`, an array of `count` JSON objects is returned.
- `options` (`Object`): Optional. Additional generation options to pass to the Ollama model (e.g., `temperature`, `top_k`, `num_ctx`). Refer to the Ollama API documentation for available options.
A `PropertySchema` defines the type and optional details for a property within your schema.
- `type` (`string`): Required. The data type for the property. Supported types:
  - `'string'`
  - `'number'`
  - `'boolean'`
  - `'array'` (requires `items` property)
  - `'object'` (requires `properties` property)
- `properties` (`Object`): Required if `type` is `'object'`. An object defining the nested properties of the object. Each value is another `PropertySchema`.
- `items` (`Object`): Required if `type` is `'array'`. A `PropertySchema` object defining the schema for each element within the array.
- `arraySize` (`number`): Optional. Specifies the number of elements for an `'array'` type. Defaults to `15` for nested arrays if not provided.
- `description` (`string`): Optional. A brief description for the property. This helps guide the LLM in generating more relevant and accurate data.
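To make the `PropertySchema` shape concrete, here is a minimal checker (illustrative only, not part of fakeapi-ai) that tests whether a value conforms to a schema written in this format:

```javascript
// Illustrative: returns true if `value` conforms to a PropertySchema.
function matchesSchema(value, schema) {
  switch (schema.type) {
    case "string":
    case "number":
    case "boolean":
      // Primitive types map directly onto typeof results.
      return typeof value === schema.type;
    case "array":
      // Every element must match the `items` sub-schema.
      return (
        Array.isArray(value) &&
        value.every((item) => matchesSchema(item, schema.items))
      );
    case "object":
      // Every declared property must match its sub-schema.
      return (
        value !== null &&
        typeof value === "object" &&
        Object.entries(schema.properties).every(([key, propSchema]) =>
          matchesSchema(value[key], propSchema)
        )
      );
    default:
      return false;
  }
}
```

A checker like this can be useful in tests, since LLM output is non-deterministic and you typically assert on shape rather than exact values.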
`stopFakeApiServer()` stops the fake API server if it's running.
- Returns: `Promise<void>` - Resolves when the server is stopped.
Feel free to open issues or submit pull requests if you have suggestions or improvements.
This project is licensed under the ISC License.