A platform for building backends

Backend platform for node.js

A general purpose backend framework. The primary goal is to provide a scalable platform for running and managing node.js servers that implement Web services.

This framework only covers the lower portion of the Web services system: node.js processes, HTTP servers, basic API functionality, database access, caching, messaging between processes, metrics and monitoring, and a library of tools for developing node.js servers.

For the UI and presentation layer there are no restrictions on what to use as long as it can run on top of the Express server.


  • Exposes a set of Web service APIs over HTTP(S) using Express framework.
  • Database API supports Sqlite, PostgreSQL, MySQL, DynamoDB, Cassandra, MongoDB, Redis with all basic operations behaving the same way, allowing you to switch databases without changing the code.
  • Database drivers for LevelDB, LMDB, CouchDB, Riak, ElasticSearch support only a subset of all database operations.
  • Easily extendable to support any kind of database; provides a database driver on top of Redis with all supported methods.
  • Provides accounts, connections, locations, messaging and icons APIs with basic functionality for a quick start.
  • Supports crontab and queue job processing by separate workers.
  • Authentication is based on signed requests using API key and secret, similar to Amazon AWS signing requests.
  • Runs web server as separate processes to utilize multiple CPU cores.
  • Supports WebSockets connections and processes them with the same Express routes as HTTP requests.
  • Supports several cache modes (Redis, memcached, LRU) for the database operations.
  • Supports several PUB/SUB modes of operation using Redis, RabbitMQ.
  • Supports job processing using several work queue implementations on top of RabbitMQ, Redis, DB.
  • Supports common database operations (Get, Put, Del, Update, Select) for all databases using the same DB API.
  • ImageMagick is compiled as C++ module for in-process image scaling.
  • REPL(command line) interface for debugging and looking into server internals.
  • Geohash based location searches supported by all databases drivers.
  • Supports push notifications for mobile devices, APN and GCM
  • Supports HTTP(S) reverse proxy mode where multiple Web workers are load-balanced by the proxy server running in the master process instead of relying on the OS scheduling between processes listening on the same port.
  • Can be used with any MVC, MVVC or other types of frameworks that work on top or with the Express server.
  • Integrated very light unit testing facility which can be used to test modules and API requests.
  • Supports runtime metrics about the timing of database calls, requests, cache and memory usage, and request rate limit control.
  • Hosted on github, BSD licensed.

Check out the Documentation for more details.

Requirements and dependencies

The module supports several databases and includes an ImageMagick interface. In order for these interfaces to be compiled, the corresponding software must be installed on the system before installing backendjs. Not everything is required; if a dependency is not available, the corresponding interface will be skipped.

The optional packages that backendjs uses if available (package resolution is done with pkg-config):

  • ImageMagick - image manipulation
  • libmysql - MySQL database driver

Installing dependencies on CentOS:

yum -y install libpng-devel libjpeg-turbo-devel mysql-devel

Installing dependencies on Mac OS X using macports:

port install libpng jpeg mysql56


To install the module with all optional dependencies if they are available in the system:

Note: if, for example, ImageMagick is not installed it will be skipped; the same goes for all database drivers (MySQL).

npm install backendjs

To force internal ImageMagick to be compiled in the module the following command must be used:

 npm install backendjs --backendjs_imagemagick

This may take some time because of downloading and compiling required dependencies like ImageMagick. They are not required by all applications but are still part of the core of the system, so they are available once needed.

To install from git:

 npm install git+

or simply

 npm install vseryakov/backendjs

Quick start

  • The simplest way of using backendjs; it will start the server listening on port 8000

      $ node
      > var bkjs = require('backendjs')
      > bkjs.server.start()
  • Same but using the helper tool; by default it will use the embedded Sqlite database and listen on port 8000

      bkjs run-backend
  • To start the server and connect to the DynamoDB (command line parameters can be saved in the etc/config file, see below about config files)

      bkjs run-backend -db-pool dynamodb -db-dynamodb-pool default -aws-key XXXX -aws-secret XXXX
  • or to the PostgreSQL server, database backend

      bkjs run-backend -db-pool pgsql -db-pgsql-pool postgresql://postgres@
  • All commands above will behave exactly the same; all required tables will be created automatically

  • While the local backendjs is running, the documentation is always available at http://localhost:8000/doc.html (or whichever port the server is using)

  • Go to http://localhost:8000/api.html for the Web console to test API requests. For this example let's create an account; type and execute the following URL in the Web console:

      /account/add?name=test1&secret=test1&login=test1

  • Now login with the new account, click on Login at the top-right corner and enter 'test1' as login and 'test1' as secret in the login popup dialog.

  • If no error message appeared after the login, try to get your current account details:

      /account/get

  • Shutdown the backend by pressing Ctrl-C

  • To make your own custom Web app, create a new directory (somewhere else) to store your project and run the following command from that directory:

      bkjs init-app
  • The app.js file is created in your project directory with 2 additional API endpoints /test/add and /test/[0-9] to show the simplest way of adding new tables and API commands.

  • The script is created for convenience in the development process; it specifies common arguments and can be customized as needed.

  • Run the new application now; it will start the Web server on port 8000:

  • Go to http://localhost:8000/api.html and issue the command /test/add?id=1&name=1 and then /test/1 in the console to see it in action

  • A change in any of the source files will make the server restart automatically, letting you focus on the source code rather than server management. This mode is enabled by default only in development; check the parameters before running it in production.

  • To start a node.js shell with backendjs loaded and initialized (all command line parameters apply to the shell as well)

      ./ -shell
  • To access the database while in the shell:

      >"bk_account", {}, function(err, rows) { console.log(rows) });
      >"bk_account", {}, db.showResult);
      > db.add("bk_account", { login: 'test2', secret: 'test2', name: 'Test 2 name', gender: 'f' }, db.showResult);
      >"bk_account", { gender: 'm' }, db.showResult);
  • To add users from the command line

      bksh -add-user login test secret test name TestUser email
  • To see current metrics run the '/system/stats/get' command in the console

  • To see charts about accumulated metrics go to http://localhost:8000/metrics.html

Backend runtime

When the backendjs server starts it spawns several processes that perform different tasks.

There are 2 major tasks of the backend that can be run at the same time or in any combination:

  • a Web server (server) with Web workers (web)
  • a job scheduler (master)

These features can be run standalone or under the guard of the monitor which tracks all running processes and restarts any failed ones.

This is the typical output from the ps command on Linux server:

ec2-user    891  0.0  0.6 1071632 49504 ?  Ssl  14:33   0:01 bkjs: monitor
ec2-user    899  0.0  0.6 1073844 52892 ?  Sl   14:33   0:01 bkjs: master
ec2-user    908  0.0  0.8 1081020 68780 ?  Sl   14:33   0:02 bkjs: server
ec2-user    917  0.0  0.7 1072820 59008 ?  Sl   14:33   0:01 bkjs: web
ec2-user    919  0.0  0.7 1072820 60792 ?  Sl   14:33   0:02 bkjs: web

To enable any task a command line parameter must be provided; it cannot be specified in the config file. The bkjs utility supports several commands that simplify running the backend in different modes.

  • bkjs run-backend - runs the Web server and the jobs scheduler in debug mode, watching source files for changes; this is the common command to be used in development. It passes the command line switches: -log debug -watch -web -master
  • bkjs run-server - this command is supposed to be run at server startup; it runs in the background and monitors all tasks. The command line parameters are: -daemon -monitor -master -syslog
  • bkjs run - runs the Web server and the job scheduler without any other parameters; all additional parameters can be added on the command line. This command is a barebone helper to be used with any other custom settings.
  • bkjs run-shell or bksh - starts the backendjs shell; no API or Web server is initialized, only the database pools

Application structure

The main purpose of backendjs is to provide an API to access the data; the data can be stored in the database or some other way, but the access to that data will be over HTTP with results returned as JSON. This is the default functionality, but any custom application may return data in whatever format is required.

Basically, backendjs is a Web server with the ability to perform data processing using local or remote jobs which can be scheduled similar to Unix cron.

The principle behind the system is that nowadays API services just return data which Web apps or mobile apps can render to the user without the backend being involved. This does not mean the backend is only a simple gateway to the database; in many cases it is, but if special processing of the data is needed before sending it to the user, it is possible to do so, and backendjs provides many convenient helpers and tools for it.

When the API layer is initialized, the api module contains the app object which is an Express server.

A special module/namespace app is designated for application development/extension. This module is available the same way as api or core, which makes it easy to reference and extend with additional methods and structures.

The typical structure of a backendjs application is the following (created by the bkjs init-app command):

    var bkjs = require('backendjs');
    var api = bkjs.api;
    var app =;
    var db = bkjs.db;

    // Describe the tables or data model
    db.describeTables({
        ...
    });

    // Optionally customize the Express environment, setup MVC routes or else, is the Express server
    app.configureMiddleware = function(options, callback) { ... }

    // Register API endpoints, i.e. url callbacks
    app.configureWeb = function(options, callback)
    {'/some/api/endpoint', function(req, res) { ... });
        callback();
    }

    // Optionally register post processing of the returned data from the default calls
    api.registerPostProcess('', /^\/account\/([a-z\/]+)$/, function(req, res, rows) { ... });

    // Optionally register access permissions callbacks
    api.registerAccessCheck('', /^\/test\/list$/, function(req, status, callback) { ... });
    api.registerPreProcess('', /^\/test\/list$/, function(req, status, callback) { ... });

    bkjs.server.start();

Except for app.configureWeb and server.start(), all other functions are optional; they are here for the sake of completeness of the example. Also, because running the backend involves more than just running a web server, many things can be set up using the configuration options, like common access permissions and cron job configuration, so the amount of code to be written to have a fully functioning production API server is not that much; basically only the request endpoint callbacks must be provided by the application.

As with any node.js application, node modules are the way to build and extend the functionality, backendjs does not restrict how the application is structured.

Another way to add functionality to the backend is via external modules specific to the backend; these modules are loaded on startup from the modules/ subdirectory of the backend home, and from the backendjs package directory for core modules. The format is the same as for regular node.js modules and only top level .js files are loaded on backend startup.

By default no modules are loaded, it must be configured by the -allow-modules config parameter.

Once loaded they have the same access to the backend as the rest of the code; the only difference is that they reside in the backend home and can be shipped independently of npm, node modules and other environment setup. These modules are exposed in core.modules the same way as all other core submodules.

Let's assume modules/ contains a file facebook.js which implements custom FB logic:

     var bkjs = require("backendjs");
     var fb = {}
     module.exports = fb;
     fb.configureWeb = function(optionscallback) {

This is the main app code:

    var bkjs = require("backendjs");
    var core = bkjs.core;
    var fb;
    // Using facebook module in the main app"some url", function(reqres) {
       fb = core.modules.facebook;
       fb.makeRequest(function(errdata) {

Database schema definition

The backend supports multiple databases and provides the same db layer for access. Common operations are supported, and anything more specific can be achieved by using SQL directly or another query language supported by the particular database. The database operations supported in the unified way provide simple actions like db.get, db.put, db.update, db.del and The db.query method provides generic access to the database driver and executes the given query directly via the db driver; it can be SQL or another driver-specific query request.

Before the tables can be queried, the schema must be defined and created; the backend db layer provides simple functions to do it:

  • first the table needs to be described; this is achieved by creating a Javascript object with properties describing each column. Multiple tables can be described at the same time; for example, let's define an album table and make sure it exists when we run our application:

        db.describeTables({
            album: {
                id: { primary: 1 },                         // Primary key for an album
                name: { pub: 1 },                           // Album name, public column
                mtime: { type: "bigint" },                  // Modification timestamp
            },
            photo: {
                album_id: { primary: 1 },                   // Combined primary key
                id: { primary: 1 },                         // consisting of album and photo id
                name: { pub: 1, index: 1 },                 // Photo name or description, public column with an index for faster search
                mtime: { type: "bigint" }
            }
        });

  • the system will automatically create the album and photo tables; this definition must remain in the app source code and be called on every app startup. This allows you 1) to see the db schema while working with the app and 2) to easily maintain it by adding new columns if necessary: all new columns will be detected and the database tables updated accordingly. And it is all Javascript, no need to learn one more language or syntax to maintain database tables.
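With the tables described above in place, here is a minimal sketch of the unified calls; the same code runs unchanged against any configured database pool:

        db.add("album", { id: "a1", name: "Vacation" }, function(err) {
            if (err) return console.error(err);

            // Fetch one record by its primary key
            db.get("album", { id: "a1" }, function(err, rows) {
                console.log(rows);
            });

            // Query by a column condition
  "photo", { album_id: "a1" }, function(err, rows) {
                console.log(rows);
            });
        });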

Each database may restrict how the schema is defined and used; the db layer does not provide an artificial layer hiding all specifics, it just provides the same API and syntax. For example, DynamoDB tables must have only a hash primary key or a combined hash and range key, so when creating a table to be used with DynamoDB only one or two columns can be marked with the primary property, while for SQL databases a composite primary key can consist of more than 2 columns.

The backendjs always creates several tables in the configured database pools by default; these tables are required to support the default API functionality and some are required for backend operations. Refer below to the Javascript modules documentation that describes which tables are created by default. In custom applications the db.describeTables method can modify columns in the default tables and add more columns if needed.

For example, to make age and some other columns in the accounts table public and visible to other users, the following can be done in the api.initApplication method. It will extend the bk_account table and the application can use the new columns the same way as the already existing columns. Using the birthday column we make the age property automatically calculated and visible in the result; this is done by the app.processAccountRow method which is registered as a post-process callback for the bk_account table. The computed property age will be returned even though it is not present in the table definition, because all properties not defined and configured are passed as is.

The cleanup of the public columns is done by api.sendJSON, which is used by all API routes when ready to send data back to the client. If any registered post-process hooks return data themselves, then it is the hook's responsibility to clean up non-public columns.

    db.describeTables({
        bk_account: {
            gender: { pub: 1 },
            birthday: {},
            ssn: {},
            salary: { type: "int" },
            occupation: {},
            home_phone: {},
            work_phone: {},
        }
    });

    app.configureWeb = function(options, callback)
    {
        db.setProcessRow("post", "bk_account", this.processAccountRow);
        callback();
    }

    app.processAccountRow = function(op, row, options, cols)
    {
        if (row.birthday) row.age = Math.floor(( - core.toDate(row.birthday))/(86400000*365));
    }

Example of a TODO application

Here is an example of how to create a simple TODO application using any database supported by the backend. It supports basic operations: add/update/delete a record, show all records.

Create a file named app.js with the code below.

    var bkjs = require('backendjs');
    var api = bkjs.api;
    var lib = bkjs.lib;
    var app =;
    var db = bkjs.db;

    // Describe the table to store todo records
    db.describeTables({
       todo: {
           id: { type: "uuid", primary: 1 },  // Store unique task id
           due: {},                           // Due date
           name: {},                          // Short task name
           descr: {},                         // Full description
           mtime: { type: "bigint", now: 1 }  // Last update time in ms
       }
    });

    // API routes
    app.configureWeb = function(options, callback)
    {
        api.app.all(/^\/todo\/([a-z]+)$/, function(req, res) {
           var options = api.getOptions(req);
           switch (req.params[0]) {
             case "get":
                if (! return api.sendReply(res, 400, "id is required");
                db.get("todo", { id: }, options, function(err, rows) { api.sendJSON(req, err, rows); });
                break;
             case "select":
                options.noscan = 0; // Allow empty scan of the whole table if no query is given, disabled by default
      "todo", req.query, options, function(err, rows) { api.sendJSON(req, err, rows); });
                break;
             case "add":
                if (! return api.sendReply(res, 400, "name is required");
                // By default due date is tomorrow
                if (req.query.due) req.query.due = lib.toDate(req.query.due, + 86400000).toISOString();
                db.add("todo", req.query, options, function(err, rows) { api.sendJSON(req, err, rows); });
                break;
             case "update":
                if (! return api.sendReply(res, 400, "id is required");
                db.update("todo", req.query, options, function(err, rows) { api.sendJSON(req, err, rows); });
                break;
             case "del":
                if (! return api.sendReply(res, 400, "id is required");
                db.del("todo", { id: }, options, function(err, rows) { api.sendJSON(req, err, rows); });
                break;
           }
        });
        callback();
    }

    bkjs.server.start();

Now run it with an option to allow API access without an account:

node app.js -log debug -web -api-allow-path /todo

To use a different database, for example PostgreSQL (running locally) or DynamoDB (assuming an EC2 instance), all config parameters can be stored in etc/config as well:

node app.js -log debug -web -api-allow-path /todo -db-pool dynamodb -db-dynamodb-pool default
node app.js -log debug -web -api-allow-path /todo -db-pool pgsql -db-pgsql-pool default

API commands can be executed in the browser or using curl:

curl 'http://localhost:8000/todo/add?name=TestTask1&descr=Descr1&due=2015-01-01'
curl 'http://localhost:8000/todo/select'
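
Update and delete follow the same pattern; for example (ID stands for the uuid returned by the add call):

curl 'http://localhost:8000/todo/update?id=ID&name=TestTask2'
curl 'http://localhost:8000/todo/del?id=ID'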

API endpoints provided by the backend

All API endpoints are optional and can be disabled or replaced easily. By default the naming convention is:

     /namespace/command[/subname[/subcommand]]

Any HTTP method can be used, because it is the command in the URL that defines the operation. The payload can be urlencoded query parameters, JSON, or any other format supported by a particular endpoint. This makes the backend universal and usable with any environment, not just a Web browser. A request signature can be passed in the query, so it does not require HTTP headers at all.
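
For example, the same command can be issued as a GET with urlencoded query parameters or as a POST with a JSON body; a sketch using the /test/add endpoint from the quick start (authentication omitted for brevity):

      curl 'http://localhost:8000/test/add?id=1&name=1'
      curl -X POST -H 'Content-Type: application/json' -d '{"id":"1","name":"1"}' 'http://localhost:8000/test/add'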

  • /auth

    This API request returns the current user record from the bk_auth table if the request is verified and the provided signature is valid. If there is no signature or it is invalid, the result will be an error with the corresponding error code and message.

    By default this endpoint is secured, i.e. it requires a valid signature. It can be used in anonymous mode as well, thus allowing cookies to be cleared unconditionally; for that set api-allow-anonymous=/auth in the config.


    • _session=1 - if the call is authenticated, a cookie with the session signature is returned; from then on all requests with that cookie will be authenticated. The primary use for this is Web apps.

    • _session=0 - clears all session cookies; if no session or no cookies are provided, returns an error for the unauthenticated request

    • _accesstoken=1 - returns a new access token to be used for subsequent requests without a signature for the current account; the token is short lived, with the expiration date returned as well. This access token can be used instead of a signature and is passed in the query as bk-access-token=TOKEN.


            { id: "NNNNN...", alias: "Test User", "bk-access-token": "XXXXX....", "bk-access-token-age": 604800000 }


The accounts API manages accounts and authentication; it provides basic user account features with common fields like email, name, address.

This is implemented by the accounts module from the core. To disable accounts functionality specify -deny-modules=accounts.

  • /account/get

    Returns information about the current account or other accounts. All account columns are returned for the current account and only public columns are returned for other accounts. This ensures that no private fields are ever exposed to other API clients. This call can also be used to log in to the service or to verify whether the given login and secret are valid; there is no special login API call because each call must be signed and all calls are stateless and independent.


    • if no id is given, return only the current account record as JSON
    • id=id,id,... - return information about the given account(s); the id parameter can be a single account id or a list of ids separated by commas
    • _session=1 - after successful login, set up a session with cookies so the Web app can perform requests without signing every request anymore
    • _accesstoken=1 - after successful login, return a new access token that can be used to make requests without signing every request; it can be passed in the query or headers with the name bk-access-token

    Note: When retrieving the current account, all properties will be present including the location; for other accounts only the properties marked as pub in the bk_account table will be returned.
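
    For example, a hypothetical request for two accounts by id:

        /account/get?id=57d07a4e28fc4f33bdca9f6c8e04d6c3,57d07a4e2824fc43bd669f6c8e04d6c3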


        { "id": "57d07a4e28fc4f33bdca9f6c8e04d6c3",
          "alias": "Test User",
          "name": "Real Name",
          "mtime": 1391824028,
          "latitude": 34,
          "longitude": -118,
          "geohash": "9qh1",
          "login": "testuser",
  • /account/logout

    Log out the current user, clearing session cookies if they exist. For pure API access with a signature this does nothing on the backend side.

  • /account/add

    Add a new account; all parameters are the columns from the bk_account table, the required columns are: name, secret, login.

    By default, this URL is in the list of allowed paths that do not need authentication, which means that anybody can add an account. For a real application this may not be a good choice, so the simplest way to disable it is to add api-disallow-path=^/account/add$ to the config file or specify it on the command line. More complex registration flows will require adding pre- and/or post-processing callbacks to handle account registration, for example with invitation codes.

    In the bk_auth table, the type column is used to distinguish between account roles; by default only an account with type admin can add other accounts with this type specified. This column can also be used in account permission implementations. Because it is in the bk_auth table, all columns of this table are available in the req.account object after successful authentication, where req is the Express request object used in the middleware parameters.

    Note: secret and login can be anything; the backend does not require any specific format and does not process the contents of the login/secret fields. In the Web client, if Bkjs.scramble is set to 1 then the secret is replaced by an HMAC value derived from the login and sent to the server; the actual login/secret are never saved, only used in the login form.



    How to make an account an admin:

          # Run backend shell
          bkjs run-shell
          # Update record by login
          > db.update("bk_auth", { login: 'login@name', type: 'admin' });
  • /account/select

    Return a list of accounts matching the given condition; this queries the bk_account table. Parameters are the column values to be matched, and all parameters starting with an underscore are control parameters that go into the options of the call with the underscore removed. This will work for SQL databases only, because DynamoDB or Cassandra will not search by non-primary keys. In the DynamoDB case this will run a ScanTable action, which will be very expensive for large tables. Supports the special query parameters _select and _ops; see the docs for more info.
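
    For example, a hypothetical query matching accounts by name, limiting the returned columns with _select:

          /account/select?name=User1&_select=name,alias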




          {  "data": [{
                        "id": "57d07a4e28fc4f33bdca9f6c8e04d6c3",
                        "alias": "Test User1",
                        "name": "User1",
                        "mtime": 1391824028,
                        "login": "test1",
                        "id": "57d07a4e2824fc43bd669f6c8e04d6c3",
                        "alias": "Test User2",
                        "name": "User2",
                        "mtime": 1391824028,
                        "login": "test2",
              "next_token": ""
  • /account/del

    Delete the current account; after this call no more requests will be authenticated with the current credentials

  • /account/update

    Update the current account with new values; the parameters are columns of the bk_account table, and only columns with non-empty values will be updated.


  • /account/put/secret

    Change the account secret for the current account; no columns except the secret will be updated or expected.


    • secret - new secret for the account
    • token_secret - set to 1 to reset the access token secret to a new value, thus revoking access for existing access tokens


  • /account/subscribe

    Subscribe to account events delivered via HTTP Long Poll. A client makes the connection and waits for events to come; whenever somebody updates the account's counter, sends a message, or creates a connection to this account, an event about it will be sent to this HTTP connection and delivered as a JSON object. This is not a persistent queue, so if the client is not listening, all events are simply ignored; only events published since the connect will be delivered. To specify what kind of events should be delivered, a match query parameter can be specified, which is a RegExp matched against the whole event body string.

    Note: On the server side there is a config parameter api-subscribe-interval which defines how often to deliver notifications; by default it is 5 seconds, which means new events will be delivered to the Web client only every 5 seconds. If more than one event happened, they all accumulate and will be sent as a JSON list.


      // To run in the browser:
      (function poll() {
          Bkjs.send({ url: "/account/subscribe", complete: poll }, function(data) {
              console.log("received event:", data);
          });
      })();


      [ { "path": "/message/add", "mtime:" 1234566566, "type": "1" },
        { "path": "/counter/incr", "mtime:" 1234566566, "type": "like,invite" } },
        { "path": "/connection/add", "mtime": 1223345545, "type": "like" } ]
  • /account/select/icon

    Return a list of available account icons, i.e. icons that have been uploaded previously with /account/put/icon calls. The url property is a URL to retrieve this particular icon.


    • id - if specified, icons for the given account will be returned




      [ { id: '12345', type: '1', url: '/account/get/icon?id=12345&type=1' },
        { id: '12345', type: '2', url: '/account/get/icon?id=12345&type=2' } ]
  • /account/get/icon

    Return an account icon; the icon is returned in the body as a binary BLOB. If no icon with the specified type exists, i.e. it has never been uploaded, then 404 is returned.


    • type - a number from 0 to 9 or any single letter a..z which defines which icon to return; if not specified, 0 is used


  • /account/put/icon

    Upload an account icon. Once uploaded, the next /account/get call will return properties in the format iconN where N is any of the type query parameters specified here; for example, if we uploaded an icon with type 5, then /account/get will return the property icon5 with the URL to retrieve this icon. By default uploaded icons are accessible only to the account which uploaded them.


    • type - icon type, a number between 0 and 9 or any single letter a..z; if not specified, 0 is used
    • icon - the icon image, which:
      • can be passed as a base64 encoded image in the query,
      • can be passed as a base64 encoded string in the body as JSON, like: { type: 0, icon: 'iVBORw0KGgoA...' }; for JSON the Content-Type HTTP header must be set to application/json and the data should be sent with a POST request,
      • can be uploaded from the browser using a regular multi-part form
    • acl_allow - icon access permissions:
      • "" (empty) - only the owner account can access it
      • all - public, everybody can see this icon
      • auth - only authenticated users can see this icon
      • id,id.. - list of account ids that can see this icon
    • _width - desired width of the stored icon; if negative this means do not upscale: if the image width is less than given, keep it as is
    • _height - height of the icon; the same rules apply as for the width above
    • _ext - image file format; the default is jpg; supports: gif, png, jpg, jp2


  • /account/del/icon

    Delete account icon


    • type - which icon to delete; if not specified, 0 is used


  • /account/get/status Return the status for the account by id; if no id is specified, return the status for the current account.

    The system maintains account status with a timestamp, to be used for presence or any other purposes. The bk_status table can be cached with any available caching system, like Redis or memcached, to provide a very fast presence state system.


  • /account/put/status Set the status of the current account; requires the status parameter, automatically updates the timestamp


  • /account/del/status Delete current account status, mostly for clearing the cache or marking offline status

When running with an AWS load balancer there should be a URL that the load balancer polls all the time, and this must be a very quick and lightweight request. For this purpose there is an API endpoint /ping that just responds with status 200. It is not open by default; the allow-path or another way to allow non-authenticated access needs to be configured. This makes it possible to control how pinging is performed in the apps in case it is not simply open access.

The image endpoint can serve any icon uploaded to the server for any account. It is supposed to be a non-secure method, i.e. no authentication will be performed and no signature will be needed, once it is configured which prefix can be public using the api-allow or api-allow-path config parameters.

The format of the endpoint is:

  • /image/prefix/id/type


      # Configure accounts icons to be public in the etc/config
      # Or pass in the command line
      ./ -api-allow-path /image/account/

      # Make requests to return icons for account 12345 for types 0 and 1
      /image/account/12345/0
      /image/account/12345/1

The icons API provides the ability for an account to store icons of different types. Each account keeps its own icons separate from other accounts; within an account, icons can be separated by prefix, which is just a namespace assigned to an icon set, for example to keep message icons separate from albums, or to use a prefix for each separate album. Within a prefix, icons can be assigned a unique type, which can be any string.

Prefix and type can consist of alphanumeric characters, dots, underscores and dashes: [a-z0-9._-]. This means they are identifiers, not real titles or names; a special mapping between prefix/type and album titles, for example, needs to be maintained separately.

The intended usage for type is to concatenate common identifiers first with more specific ones to form a unique icon type, which can later be queried by prefix or exactly by icon type. For example, an album id can come first, then a sequential icon number, like album1:icon1, album1:icon2...; then retrieving all icons for an album requires only a query with the album1: prefix, as sketched below.
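
A hypothetical flow under this convention:

      # Store two icons for the same album
      /icon/put?prefix=album&type=album1:icon1
      /icon/put?prefix=album&type=album1:icon2

      # Retrieve all icons for that album
      /icon/select?prefix=album&type=album1: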

This is implemented by the icons module from the core. To disable this functionality specify -deny-modules=icons.

  • /icon/get

    Return an icon for the current account in the given prefix. Icons are kept on the local disk in the directory configured by the -api-images-dir parameter (the default is images/ in the backend directory). The current account id is used to keep icons separate from other accounts. Icon presence is checked in the bk_icon table before returning it, and if any permissions are set in the acl_allow column it is checked whether this icon can be returned.

    The following parameters can be used:

    • prefix - must be specified, this defines the icons namespace
    • type - used to specify a unique icon created with this type, which can be any string
  • /icon/put

    Upload a new icon for the given account in the folder prefix; if type is specified, it creates an icon of this type to separate multiple icons for the same prefix. type can be any string consisting of alphanumeric characters. This call creates a record in the bk_icon table with all the parameters passed.

    The following parameters can be used:

    • prefix - prefix for the icons, required
    • descr - optional description of the icon
    • latitude, longitude - optional coordinates for the icon
    • acl_allow - allow access permissions, see /account/put/icon for the format and usage
    • _width - desired width of the stored icon; if negative this means do not upscale: if the image width is less than given, keep it as is
    • _height - height of the icon; the same rules apply as for the width above
    • _ext - image file format; the default is jpg; supports: gif, png, jpg
  • /icon/upload

    Upload a new image and store it on the server; no record is created in the bk_icon table, just a simple image upload, but all the same query parameters as for /icon/put are accepted. Returns a JSON object with the url property being the full path to the uploaded image.

  • /icon/del

    Delete the default icon for the current account in the folder prefix or by type

  • /icon/select

    Return a list of available icons for the given prefix and type; all icons starting with prefix/type will be returned, and the url property will provide the full URL to retrieve the icon contents




      [ { id: 'b3dcfd1e63394e769658973f0deaa81a', type: 'me-1', icon: '/icon/get?prefix=album&type=me1' },
        { id: 'b3dcfd1e63394e769658973f0deaa81a', type: 'me-2', icon: '/icon/get?prefix=album&type=me2' } ]
      [ { id: 'b3dcfd1e63394e769658973f0deaa81a', type: '12345-f0deaa81a', icon: '/icon/get?prefix=album&type=12345-f0deaa81a' } ]

The file API provides the ability to store and retrieve files. The operations are similar to the icons API.

This is implemented by the files module from the core. To disable this functionality specify -deny-modules=files.

  • /file/get

    Return a file with given prefix and name, the contents are returned in the response body.

    The following parameters can be used:

    • prefix - must be provided, defines the namespace where the file is stored
    • name - name of the file, required
  • /file/put

    Store a file on the backend, the file can be sent using form multipart upload or as JSON

    The following parameters can be used:

    • prefix - must be provided, defines the namespace where the file is stored
    • name - name of the file, required
    • _name - name of the property that contains the file contents, for use with JSON, or the name of the file attribute for multipart upload
    • _tm - append the current timestamp to the file name
    • _ext - extension to be assigned to the file, otherwise the actual extension from the file name is used
  • /file/del

    Delete file, prefix and name must be given

The connections API maintains two tables, bk_connection and bk_reference, for links between accounts of any type. The bk_connection table maintains my links, i.e. when I make an explicit connection to another account, and the bk_reference table is automatically updated with a reference for that other account showing that I made a connection with it. No direct operations on bk_reference are allowed.

This is implemented by the connections module from the core. To disable this functionality specify -deny-modules=connections.

  • /connection/add

  • /connection/put Create or replace a connection between two accounts; the required parameters are:

    • peer - id of account to connect to
    • type - type of connection, like,dislike,....
    • _connected - the reply will contain a connection record if the other side of our connection is connected to us as well
    • _publish - notify another account about this via pub/sub messaging system if it is active
    • _noreference - do not create the reference record for this connection
    • _nocounter - do not auto increment any counters

    This call automatically creates a record in the bk_reference table, which is the reversed connection, for easy access to information like ''who is connected to me'', and auto-increments the like0, like1 counters for both accounts in the bk_counter table.

    Also, this call updates the counters in the bk_counter table for my account which match the connection type; for example, if the type of connection is invite and the bk_counter table contains the columns invite0 and invite1, then both counters will be increased.
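
    For example, a hypothetical call creating a like connection to account 12345:

      /connection/put?peer=12345&type=like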


  • /connection/update

  • /connection/incr Update other properties of an existing connection, for connections that may take more than one step or if a connection has other data associated with it besides the type of the connection.


  • /connection/del Delete existing connection(s); id and/or type may be specified, if not, all existing connections will be deleted.


  • /connection/get Return a single connection for given id


    • peer - account id of the connection, required
    • type - connection type, required




      { "id": "1111",
        "type: "like",
        "peer": "12345",
        "mtime": "2434343543543" }
  • /reference/get Return a single reference record for the given account id; works the same way as /connection/get

  • /connection/select Return all my connections of the given type, i.e. connection(s) I made; if id is given, only the record for the specified connection will be returned. Supports the special query parameters _select, _ops, _desc; see the docs for more info. All options can be passed in the query with a prepended underscore.

    By default only connection columns will be returned, specifying _accounts=1 will return public account columns as well.


      # Return all accounts whom I invited
      /connection/select?type=invite&_accounts=1

      # Return the connection for a specific type and account id
      /connection/select?type=invite&id=12345

      # Return accounts whom I invited after the specified mtime
      /connection/select?type=invite&_accounts=1&_ops=mtime,gt&mtime=12334312543

      # Return accounts whom I invited before the specified mtime
      /connection/select?type=invite&_accounts=1&_ops=mtime,lt&mtime=12334312543


      { "data": [ { "id": "111",
                    "type": "invite",
                    "peer": "12345",
                    "status": "",
                    "mtime": "12334312543"
        "next_token": ""
  • /reference/select Return all references connected with my account, i.e. connections made by somebody else with me; works the same way as the connection query call


      # Return all accounts who invited me
      /reference/select?type=invite&_accounts=1

      # Return accounts who invited me after the specified mtime
      /reference/select?type=invite&_accounts=1&_ops=mtime,gt&mtime=12334312543


      { "data": [ { "id": "111",
                    "type": "invite",
                    "peer": "12345",
                    "status": "",
                    "mtime": "12334312543"
        "next_token": ""

The location API maintains the bk_location table with geolocation coordinates for accounts and allows searching it by distance. The configuration parameter min-distance defines the radius in km of the smallest bounding box containing a single location; radius searches will combine neighboring boxes of this size to cover the whole area of the given distance request. This also affects the length of the geohash keys stored in the bk_location table. By default min-distance is 5 km, which means all records in the bk_location table will have a geohash of size 4. Once min-distance is set, it cannot be changed without rebuilding the bk_location table with the new geohash size.

The location search is implemented by using the geohash as the primary key in the bk_location table with the account id as the second part of the primary key; for DynamoDB this is the range key. When a request comes for all matches around a location, for example 37.7, -122.4, the search that is executed looks like this:

  • the geohash for latitude 37.7 and longitude -122.4 with radius 10 km will be 9q8y
  • all neighboring areas around this point within the 10 km radius will be '9q8z', '9q8v', '9q8w', '9q8x', '9q8t', '9q9n', '9q9p', '9q9j'
  • we start the search on the bk_location table by the primary key geohash with the value 9q8y
  • we filter out all records beyond our radius by calculating the distance between our point and the candidate record
  • if the total number of results is still less than required, we continue to the next neighbor area
  • we continue until we have visited all neighbors or received the required number of matched records
  • on return, the next_token opaque value will be provided if we want to continue the search for more matches for the same location

This is implemented by the locations module from the core. To disable this functionality specify -deny-modules=locations.

  • /location/put Store the current location for the current account; latitude and longitude parameters must be given. This call will update the bk_account table as well with these coordinates


  • /location/get Return matched accounts within the distance (radius) specified by the distance= parameter in kilometers and the current position specified by the latitude/longitude parameters. This call returns results in chunks and requires navigating through all pages to receive all matched records. Returned records will start with the closest to the current point. If there are more matched records than specified by _count, the next_token property is set with the token to be used in the subsequent call; it must be passed as-is as the _token= parameter with all original query parameters.

    By default only locations with account ids will be returned, specifying _accounts=1 will return public account columns as well.

    Note: The current account will not be present in the results even if it is within the range; to know my own location use the /account/get call.
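
    For example, a hypothetical search around the point used above, including public account columns:

         /location/get?latitude=37.7&longitude=-122.4&distance=10&_accounts=1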




         { "data": [ { "id": "12345",
                       "distance": 5,
                       "latitude": -118.123,
                       "longitude": 23.45
                       "mtime": "12334312543"
                     { "id": "45678",
                       "distance": 5,
                       "latitude": -118.133,
                       "longitude": 23.5
                       "mtime": "12334312543"
           "next_token": ""

The messaging API allows sending and receiving messages between accounts; it supports text and images. All new messages arrive into the bk_message table, the inbox. The client may keep messages there as new, delete them, or archive them. Archiving means transferring messages into the bk_archive table. All sent messages are kept in the bk_sent table.

This is implemented by the messages module from the core. To disable this functionality specify -deny-modules=messages.

  • /message/get Read all new messages, i.e. messages that have never been read and for which no /message/archive call has been issued.


    • _archive - if set to 1, all returned messages will be archived automatically, so no individual /message/read call needed
    • _trash - if set to 1, all returned messages will be deleted, not archived
    • _accounts - if set to 1, return associated account details for the sender


      # Get all new messages
      /message/get

      # Get all new messages and archive them
      /message/get?_archive=1

      # Get all new messages from a specific sender
      /message/get?sender=12345
  • /message/get/archive Receive archived messages. The images are not returned; only a link to the image is returned in the icon property of each record, and the actual image data must be retrieved separately.


    • mtime - if specified, only messages received since that time will be returned; it must be in milliseconds since midnight GMT on January 1, 1970, which is what returns in Javascript.
    • sender - if specified then all messages from the given sender will be returned.

    NOTE: The mtime is when the backend server received the message; if the client and server clocks are off, this may return wrong data or nothing at all. Also, because the arrival order of messages cannot be guaranteed, sending multiple messages quickly may result in them being received in a different order by the backend, producing mtimes that do not correspond to the actual times the messages were sent.


      # Get all messages
      /message/get/archive

      # Get all messages received after the given mtime
      /message/get/archive?_ops=mtime,gt&mtime=12334312543

      # Get all messages received before the given mtime
      /message/get/archive?_ops=mtime,lt&mtime=12334312543

      # Get all messages with a custom filter: msg text contains 'Hi'
      /message/get/archive?_ops=msg,iregexp&msg=Hi

      # Get all messages from a specific sender
      /message/get/archive?sender=12345


      { "data": [ { "sender": "12345",
                    "msg": "Hi, how r u?",
                    "mtime": "12334312543"
                  { "sender": "45678",
                    "msg": "check this out!",
                    "icon": "/message/image?sender=45678&mtime=12334312543",
                    "mtime": "12334312543"
           "next_token": ""
  • /message/get/sent Return all messages I sent out. All the same query rules apply as for the archived messages API call.


    • recipient - id of the recipient to whom I sent messages
    • mtime - time before or after the messages were sent, defined by the _ops parameter


  • /message/add Send a message to an account, the following parameters must be specified:

    • id - recipient account id
    • msg - text of the message, can be empty if icon property exists
    • icon - icon of the message; it can be a base64 encoded image in the query, or a JSON string if the whole message is posted as JSON, or a multipart file upload if submitted via the browser; can be omitted if the msg property exists.
    • _nosent - do not save this message in my sent messages
    • _publish - notify another account about this via pub/sub messaging system if it is active
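
    For example, a hypothetical minimal call sending a text message:

      /message/add?id=12345&msg=Hello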


  • /message/archive Move a new message to the archive. The required query parameters are sender and mtime.


  • /message/update Update a message, can be used to keep track of read/unread status, etc...


  • /message/update/archive Update a message in the archive.

  • /message/del Delete new message(s) by sender and/or mtime which must be passed as query parameters. If no mtime is given, all messages from the given sender will be deleted.


  • /message/del/archive Delete archived message(s) by sender and/or mtime which must be passed as query parameters. If no mtime is given, all messages from the given sender will be deleted.


  • /message/del/sent Delete the message(s) by recipient and/or mtime which must be passed as query parameters. If no mtime is given, all messages to the given recipient will be deleted.


  • /message/image Return the image data for the given message, the required parameters are:

    • sender - id of the sender, returned in the /message/get reply results for every message
    • mtime - exact timestamp of the message

The counters API maintains realtime counters for every account record; a counters record may contain many different counter columns for different purposes and is always cached with whatever cache service is used. By default it is cached by the Web server process on every machine; Web worker processes ask the master Web server process for the cached records, so there is only one copy of the cache per machine even with multiple CPU cores.

This is implemented by the counters module from the core. To disable this functionality specify -deny-modules=counters|accounts.

  • /counter/get Return the counter record for the current account with all available columns, or if id is given, return public columns for the given account. It works with the bk_counter table, which by default defines some common columns:

    • ping - a counter for general use; can be used to send a notification event to any account by increasing this counter for that account
    • like0 - how many I liked, i.e. how many times I made a new record in the bk_connection table with type 'like'
    • like1 - how many liked me, the reverse counter, i.e. who connected to me with type 'like'

    More columns can be added to the bk_counter table.

    NOTE: The columns with suffixes 0 and 1 are special columns that support the connections API. Every time a new connection is created, the type of the new connection is checked against the columns in the bk_counter table; if a property type0 exists and is marked in the table description as autoincr, then the corresponding counter property is increased. This is how, every time a new connection like/dislike/invite/follow is added, the counters in the bk_counter table are increased.

  • /counter/put Replace my counters record; all values not specified will be set to 0

  • /counter/incr Increase one or more counter fields; each column can provide a numeric value which will be added to the existing value, and negative values will be subtracted. If the id parameter is specified, only public columns will be increased for the other account.
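
    For example, a hypothetical increment of the ping counter for another account:

      /counter/incr?id=12345&ping=1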



The data API is a generic way to access any table in the database with common operations; as opposed to the specific APIs above, this API only deals with one table and one record, without maintaining any other features like auto counters, cache, etc.

Because it exposes the whole database to anybody who has a login, it is a good idea to disable this endpoint in production or to provide an access callback that verifies who can access it.

  • To disable this endpoint completely in the config: deny-modules=data

  • To allow only admins to access it in the config: api-allow-admin=^/data

  • To allow only admins to access it programmatically:

    api.registerPreProcess('GET', '/data', function(req, status, cb) { if (req.account.type != "admin") return cb({ status: 401, message: 'access denied' }); cb(status); });

This is implemented by the data module from the core.

  • /data/columns

  • /data/columns/TABLE Return columns for all tables or the specific TABLE

  • /data/keys/TABLE Return primary keys for the given TABLE

  • /data/(select|search|list|get|add|put|update|del|incr|replace)/TABLE Perform a database operation on the given TABLE; all options for the db functions are passed as query parameters prepended with an underscore, regular parameters are the table columns.

    By default the API does not allow table scans without a condition, to avoid expensive and long queries; to enable a scan pass _noscan=0. For this to work the data API must be configured as unsecure in the config file using the parameter api-unsecure=data.

    Some tables, like messages and connections, perform data conversion before returning the results, mostly splitting combined columns like type into separate fields. To return raw data pass the parameter _noprocessrows=1.
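
    Hypothetical examples using the default bk_account table:

      # Return rows, explicitly enabling a full scan
      /data/select/bk_account?_noscan=0

      # Get one record by primary key
      /data/get/bk_account?id=12345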



The pages API provides a simple Wiki like system with Markdown formatting. It keeps all pages in the database table bk_pages and exposes an API to manage and render pages.

The pages support a public mode: all pages with pub set to true will be returned without an account; this must be enabled with api-allow-path=^/pages/(get|select|show) to work.

All .md files will be rendered into html automatically if there is no _raw=1 query parameter and a pages view exists (api-pages-view=pages.html by default).

This is implemented by the pages module from the core. To disable this functionality specify -deny-modules=pages.

  • /pages/get/ID Return the page with the given id, or the main page if id is empty. If the query parameter _render=1 is given, the content will be rendered into html from markdown; otherwise it returns all data as is.

  • /pages/select Return all pages or only ones which match the query criteria. This potentially scans the whole table to return all pages and is used to show pages index.

  • /pages/put Replace or add a new page.

  • /pages/del Delete a page from the database

  • /pages/show/ID Render the page with the given id; markdown is converted into html using marked. A view must be configured in order for rendering to work; by default the pages.html view is provided to simply wrap the markdown in the page layout.

The system API returns information about backend statistics and allows provisioning and configuration commands and other internal maintenance functions. By default it is open for access to all users, but the same security considerations apply here as for the data API.

This is implemented by the system module from the core. To disable this functionality specify -deny-modules=system.

  • /system/restart Perform a restart of the Web processes; this will be done gracefully, only one Web worker process will be restarting while the other processes keep serving requests. The intention is to allow code updates on live systems without service interruption.

  • /system/cache/(init|stats|keys|get|set|put|incr|del|clear) Access to the caching functions

  • /system/msg/(msg) Access to the messaging functions

  • /system/params Return all config parameters applied from the config file(s) or remote database.

  • /system/stats/get Database pool statistics and other diagnostics

    • latency - how long a pending request waits in queue at this moment
    • busy - how many busy error responses have been returned so far
    • pool - database metrics
      • response - stats about how long it takes between issuing the db request and till the final moment all records are ready to be sent to the client
      • queue - stats about db requests at any given moment queued for the execution
      • cache - db cache response time and metrics
    • api - Web requests metrics, same structure as for the db pool metrics
    • url - metrics per url endpoints

    Individual sub-objects:

    • meter - Things that are measured as events / interval.
      • rmean: The average rate since the meter was started.
      • rcnt: The total of all values added to the meter.
      • rate: The rate of the meter since the last toJSON() call.
      • r1m: The rate of the meter biased towards the last 1 minute.
      • r5m: The rate of the meter biased towards the last 5 minutes.
      • r15m: The rate of the meter biased towards the last 15 minutes.
    • queue or histogram - Keeps a reservoir of statistically relevant values biased towards the last 5 minutes to explore their distribution
      • hmin: The lowest observed value.
      • hmax: The highest observed value.
      • hsum: The sum of all observed values.
      • hvar: The variance of all observed values.
      • hmean: The average of all observed values.
      • hdev: The standard deviation of all observed values.
      • hcnt: The number of observed values.
      • hmed: median, 50% of all values in the reservoir are at or below this value.
      • hp75: see median, 75th percentile.
      • hp95: see median, 95th percentile.
      • hp99: see median, 99th percentile.
      • hp999: see median, 99.9th percentile.


                "id": "",
                "ip": "",
                "mtime": 1417500027321,
                "ctime": 1416941754760,
                "type": "",
                "host": "",
                "pid": 25170,
                "instance": "i-d4c89eff",
                "worker": 27,
                "latency": 0,
                "cpus": 4,
                "mem": 15774367744,
                "rss_hmin": 66879488,
                "rss_hmax": 151891968,
                "rss_hsum": 2451506479104,
                "rss_hvar": 254812067010902.66,
                "rss_hmean": 118895507.98312236,
                "rss_hdev": 15962833.92793719,
                "rss_hcnt": 20619,
                "rss_hmed": 147644416,
                "rss_h75p": 149262336,
                "rss_h95p": 150834585.6,
                "rss_h99p": 151550033.92000002,
                "rss_h999p": 151886266.368,
                "heap_hmin": 25790920,
                "heap_hmax": 72316184,
                "heap_hsum": 1029889929504,
                "heap_hvar": 54374337037311.65,
                "heap_hmean": 49948587.68630874,
                "heap_hdev": 7373895.648658967,
                "heap_hcnt": 20619,
                "heap_hmed": 57480704,
                "heap_h75p": 61934254,
                "heap_h95p": 67752391.2,
                "heap_h99p": 70544797.92,
                "heap_h999p": 72315029.104,
                "avg_hmin": 0.04541015625,
                "avg_hmax": 0.06005859375,
                "avg_hsum": 938.234375,
                "avg_hvar": 4.491222722966496e-7,
                "avg_hmean": 0.04550338886463941,
                "avg_hdev": 0.0006701658543201448,
                "avg_hcnt": 20619,
                "avg_hmed": 0.04541015625,
                "avg_h75p": 0.04541015625,
                "avg_h95p": 0.04541015625,
                "avg_h99p": 0.05078125,
                "avg_h999p": 0.05997363281250001,
                "free_hmin": 128