JsonRestStores

A module to create full Json REST stores in minutes

JsonRestStores is the best way to create self-documenting REST stores that return JSON data (yes: self-documenting!). JsonRestStores is in RC3 status, and the API is 100% locked. Please (find and) file bugs and requests as issues against this repo.

Rundown of features:

  • DRY approach. Create complex applications keeping your code short and tight, without repeating yourself.
  • Down-to-earth. It does what developers actually need, using existing technologies.
  • Database-agnostic. You can either use a generic database connector, or implement the (simple!) data-manipulation methods yourself.
  • Protocol-agnostic. For now, only HTTP is implemented. However, with JsonRestStores the protocol used to make REST calls doesn't actually matter.
  • Schema based. Anything coming from the client will be validated and cast to the right type.
  • File uploads. File uploads are supported out of the box; the schema field will contain the uploaded file's path
  • API-ready. Every store function can be called via API, which bypasses permission constraints
  • Tons of hooks. You can hook into every step of the store's processing: afterValidate(), afterCheckPermissions(), afterDbOperation(), afterEverything()
  • Authentication hooks. Only implement things once, and keep authentication tight and right.
  • Mixin-based. You can add functionality easily.
  • Inheriting stores. You can easily derive a store from another one.
  • Nested data. Automatically load all child records and lookup records from other stores.
  • Simple error management. Errors can be chained up, or they can make the store return them to the client.
  • Great documentation. Every aspect of JsonRestStores is carefully explained and documented. Note that everyday usage doesn't require knowledge of every single aspect of JsonRestStores.

JsonRestStores even comes with its own database layer mixin, SimpleDbLayerMixin, which will implement all of the important methods that will read, write and delete elements from a database. The mixin uses simpledblayer to access the database. For now, only MongoDB is supported, but more will surely come.

Introduction to (JSON) REST stores

Here is an introduction on REST, JSON REST, and this module. If you are a veteran of REST stores, you can probably just skim through this.

Imagine that you have a web application with bookings, and users connected to each booking, and that you want to make this information available via a JSON Rest API. You would have to define the following routes in your application:

  • GET /bookings/
  • GET /bookings/:bookingId
  • PUT /bookings/:bookingId
  • POST /bookings/
  • DELETE /bookings/:bookingId

And then to access users for that booking:

  • GET /bookings/:bookingId/users/
  • GET /bookings/:bookingId/users/:userId
  • PUT /bookings/:bookingId/users/:userId
  • POST /bookings/:bookingId/users/
  • DELETE /bookings/:bookingId/users/:userId

And then -- again -- to get/post data from individual fields with one call:

  • GET /bookings/:bookingId/users/:userId/field1
  • PUT /bookings/:bookingId/users/:userId/field1
  • GET /bookings/:bookingId/users/:userId/field2
  • PUT /bookings/:bookingId/users/:userId/field2
  • GET /bookings/:bookingId/users/:userId/field3
  • PUT /bookings/:bookingId/users/:userId/field3

It sounds simple enough (although it's only two tables and it already looks rather boring). It gets tricky when you consider that:

  • You need to make sure that permissions are always carefully checked. For example, only users that are part of booking 1234 can GET /bookings/1234/users
  • When implementing GET /bookings/, you need to parse the URL in order to enable data filtering (for example, GET /bookings?dateFrom=1976-01-10&name=Tony will need to filter, on the database, all bookings made after the 10th of January 1976 by Tony).
  • When implementing GET /bookings/, you need to return the right Content-Range HTTP headers in your results so that the clients know what range they are getting.
  • When implementing GET /bookings/, you also need to make sure you take into account any Range header set by the client, which might only want to receive a subset of the data
  • When implementing /bookings/:bookingId/users/:userId, you need to make sure that the bookingId exists
  • With POST and PUT, you need to make sure that data is validated against some kind of schema, and return the appropriate errors if it's not.
  • With PUT, you need to consider the HTTP headers If-Match and If-None-Match to see if you can/should/must overwrite existing records
  • All unimplemented methods should return a 501 Not Implemented server response
  • You need to implement all of the routes for individual fields, and add permissions

This is only a short list of obvious things: there are many more to consider. The point is, when you make a store you should be focusing on the important parts (the data you gather and manipulate, and permission checking) rather than repetitive, boilerplate code.

With JsonRestStores, you can create JSON REST stores without ever worrying about any one of those things. You can concentrate on what really matters: your application's data, permissions and logic.

If you are new to REST and web stores, you will probably benefit by reading a couple of important articles. Understanding the concepts behind REST stores will make your life easier.

I suggest you read John Calcote's article about REST, PUT, POST, etc.. It's a fantastic read, and I realised that it was written by John, who is a long term colleague and fellow writer at Free Software Magazine, only after posting this link here!

You should also read my small summary of what a REST store actually provides.

At this stage, the stores are 100% compatible with Dojo's JsonRest as well as Sitepen's dstore.

Dependencies overview

JsonRestStores is a module that creates managed routes for you, and integrates very easily with existing ExpressJS applications.

Here is a list of modules used by JsonRestStores. You should be at least slightly familiar with them.

  • SimpleDeclare - Github. This module makes creation of constructor functions/classes a breeze. Using SimpleDeclare is a must when using JsonRestStores -- unless you want to drown in unreadable code.

  • SimpleSchema - Github. This module makes it easy (and I mean, really easy) to define a schema and validate/cast data against it. It's really simple to extend a schema as well. It's a no-fuss module.

  • Allhttperrors. A simple module that creates Error objects for all of the possible HTTP statuses.

  • SimpleDbLayer. The database layer used to access the database

Note that all of these modules are fully unit-tested, and are written and maintained by me.

It is recommended that you have a working knowledge of SimpleDbLayer (focusing on querying and automatic loading of children) before delving too deep into JsonRestStores, as JsonRestStores uses the same syntax to create queries and to define nested layers.

Your first Json REST store

Creating a store with JsonRestStores is very simple. Here is how you make a fully compliant store, ready to be added to your Express application:

  var JsonRestStores = require('jsonreststores'); // The main JsonRestStores module
  var Schema = require('simpleschema');  // The main schema module
  var SimpleDbLayer = require('simpledblayer');
  var MongoMixin = require('simpledblayer-mongo')
  var declare = require('simpledeclare');

  // The DbLayer constructor will be a mixin of SimpleDbLayer (base) and
  // MongoMixin (providing mongo-specific driver to SimpleDbLayer)
  var DbLayer = declare( SimpleDbLayer, MongoMixin, { db: db } );

  // Basic definition of the managers store
  var Managers = declare( JsonRestStores, JsonRestStores.HTTPMixin, JsonRestStores.SimpleDbLayerMixin, {

    // Constructor class for database-access objects, which in this case
    // will access MongoDB collections
    DbLayer: DbLayer,

    schema: new Schema({
      name   : { type: 'string', trim: 60 },
      surname: { type: 'string', searchable: true, trim: 60 },
    }),

    storeName: 'managers',
    publicURL: '/managers/:id',

    handlePut: true,
    handlePost: true,
    handleGet: true,
    handleGetQuery: true,
    handleDelete: true,
  });

  var managers = new Managers();

  JsonRestStores.init();
  managers.protocolListen( 'HTTP', { app: app } );

Note that since you will be mixing in JsonRestStores with JsonRestStores.HTTPMixin and JsonRestStores.SimpleDbLayerMixin for every single store you create (more about mixins shortly), you might decide to create the mixin once and for all, making the code less verbose:

var JsonRestStores = require('jsonreststores'); // The main JsonRestStores module
var Schema = require('simpleschema');  // The main schema module
var SimpleDbLayer = require('simpledblayer');
var MongoMixin = require('simpledblayer-mongo')
var declare = require('simpledeclare');

// The DbLayer constructor will be a mixin of SimpleDbLayer (base) and
// MongoMixin (providing mongo-specific driver to SimpleDbLayer)
var DbLayer = declare( SimpleDbLayer, MongoMixin, { db: db } );

// Mixin of JsonRestStores, JsonRestStores.HTTPMixin and JsonRestStores.SimpleDbLayerMixin
// with the DbLayer parameter already set
var Store = declare( JsonRestStores, JsonRestStores.HTTPMixin, JsonRestStores.SimpleDbLayerMixin, { DbLayer: DbLayer } );

// Basic definition of the managers store
var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', trim: 60 },
    surname: { type: 'string', searchable: true, trim: 60 },
  }),

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,
});

var managers = new Managers();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );

That's it: this is enough to add, to your Express application, a full store which will properly handle all of the HTTP calls.

  • Managers is a new constructor function that inherits from JsonRestStores (the main constructor for JSON REST stores) mixed in with JsonRestStores.HTTPMixin (which ensures that protocolListen() works with the HTTP parameter, allowing clients to connect using HTTP) and JsonRestStores.SimpleDbLayerMixin (which gives JsonRestStores the ability to manipulate data on a database automatically).
  • DbLayer is a SimpleDbLayer constructor mixed in with MongoMixin, the MongoDB-specific layer for SimpleDbLayer. So, DbLayer will be used by Managers to manipulate MongoDB collections.
  • schema is an object of type Schema that will define what's acceptable in a REST call.
  • publicURL is the URL the store is reachable at. The last ID in publicURL (in this case it's also the only one: id) is the most important one: it defines which field, within your schema, will be used as the record ID when performing a PUT and a GET (both of which require a specific ID to function).
  • storeName (mandatory) needs to be a unique name for your store.
  • handleXXX are attributes which will define how your store will behave. If you have handlePut: false and a client tries to PUT, they will receive a NotImplemented HTTP error.
  • protocolListen( 'HTTP', { app: app } ) creates the right Express routes to receive HTTP connections for the GET, PUT, POST and DELETE methods.
  • JsonRestStores.init() should always be run once you have declared all of your stores. This function will run the initialisation code necessary to make nested stores work properly.

JsonRestStores is very unobtrusive in your Express application. In order to make everything work, you can just:

  • Generate a new ExpressJS application
  • Connect to the database
  • Define the stores using the code above.

This is how the stock express code would change to implement the store above (please note that this is mostly code autogenerated when you generate an Express application):

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

var routes = require('./routes/index');
var users = require('./routes/users');

var app = express();

// CHANGED: ADDED AN INCLUDE `dbConnect`
var dbConnect = require('./dbConnect');

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

// uncomment after placing your favicon in /public
//app.use(favicon(__dirname + '/public/favicon.ico'));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));

app.use('/', routes);
app.use('/users', users);

// CHANGED: Added call to dbConnect, and waiting for the db
dbConnect( function( db ){

  // ******************************************************
  // ********** CUSTOM CODE HERE **************************
  // ******************************************************

  var JsonRestStores = require('jsonreststores'); // The main JsonRestStores module
  var Schema = require('simpleschema');  // The main schema module
  var SimpleDbLayer = require('simpledblayer');
  var MongoMixin = require('simpledblayer-mongo')
  var declare = require('simpledeclare');

  // The DbLayer constructor will be a mixin of SimpleDbLayer (base) and
  // MongoMixin (providing mongo-specific driver to SimpleDbLayer)
  var DbLayer = declare( SimpleDbLayer, MongoMixin, { db: db } );

  // Common mixin of JsonRestStores, JsonRestStores.SimpleDbLayerMixin and the DbLayer parameter
  // already set

  var Store = declare( JsonRestStores, JsonRestStores.SimpleDbLayerMixin, { DbLayer: DbLayer } );

  var Managers = declare( Store, {

    schema: new Schema({
      name   : { type: 'string', trim: 60 },
      surname: { type: 'string', searchable: true, trim: 60 },
    }),

    storeName: 'managers',
    publicURL: '/managers/:id',

    handlePut: true,
    handlePost: true,
    handleGet: true,
    handleGetQuery: true,
    handleDelete: true,
  });
  var managers = new Managers();

  JsonRestStores.init();
  managers.protocolListen( 'HTTP', { app: app } );

  // ******************************************************
  // ********** END OF CUSTOM CODE      *******************
  // ******************************************************

  // catch 404 and forward to error handler
  app.use(function(req, res, next) {
      var err = new Error('Not Found');
      err.status = 404;
      next(err);
  });

  // error handlers

  // development error handler
  // will print stacktrace
  if (app.get('env') === 'development') {
      app.use(function(err, req, res, next) {
          res.status(err.status || 500);
          res.render('error', {
              message: err.message,
              error: err
          });
      });
  }

  // production error handler
  // no stacktraces leaked to user
  app.use(function(err, req, res, next) {
      res.status(err.status || 500);
      res.render('error', {
          message: err.message,
          error: {}
      });
  });


});
module.exports = app;

The dbConnect.js file is simply something that will connect to the database and call the callback with the db instance:

var mongo = require('mongodb');
exports = module.exports = function( done ){
  // Connect to the database
  mongo.MongoClient.connect('mongodb://localhost/storeTesting', {}, function( err, db ){
    if( err ){
      console.error( "Error connecting to the database: ", err );
      process.exit( 1 );
    }
    return done( db );
  });
}

This store is actually fully live and working! It will manipulate your database and will respond to any HTTP requests appropriately.

A bit of testing with curl:

$ curl -i -XGET  http://localhost:3000/managers/
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 2
ETag: "223132457"
Date: Mon, 02 Dec 2013 02:20:21 GMT
Connection: keep-alive

[]

$ curl -i -X POST -d "name=Tony&surname=Mobily"  http://localhost:3000/managers/
HTTP/1.1 201 Created
X-Powered-By: Express
Location: /managers/2
Content-Type: application/json; charset=utf-8
Content-Length: 54
Date: Mon, 02 Dec 2013 02:21:17 GMT
Connection: keep-alive

{
  "id": 2,
  "name": "Tony",
  "surname": "Mobily"
}

$ curl -i -X POST -d "name=Chiara&surname=Mobily"  http://localhost:3000/managers/
HTTP/1.1 201 Created
X-Powered-By: Express
Location: /managers/4
Content-Type: application/json; charset=utf-8
Content-Length: 54
Date: Mon, 02 Dec 2013 02:21:17 GMT
Connection: keep-alive

{
  "id": 4,
  "name": "Chiara",
  "surname": "Mobily"
}

$ curl -i -XGET  http://localhost:3000/managers/
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 136
ETag: "1058662527"
Date: Mon, 02 Dec 2013 02:22:29 GMT
Connection: keep-alive

[
  {
    "id": 2,
    "name": "Tony",
    "surname": "Mobily"
  },
  {
    "id": 4,
    "name": "Chiara",
    "surname": "Mobily"
  }
]


$ curl -i -XGET  http://localhost:3000/managers/?surname=mobily
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 136
ETag: "15729456527"
Date: Mon, 02 Dec 2013 02:22:35 GMT
Connection: keep-alive

[
  {
    "id": 2,
    "name": "Tony",
    "surname": "Mobily"
  },
  {
    "id": 4,
    "name": "Chiara",
    "surname": "Mobily"
  }
]

$ curl -i -XGET  http://localhost:3000/managers/?surname=fabbietti
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 2
ETag: "1455673456"
Date: Mon, 02 Dec 2013 02:22:42 GMT
Connection: keep-alive

[]

$ curl -i -X PUT -d "name=Merc&surname=Mobily"  http://localhost:3000/managers/2
HTTP/1.1 200 OK
X-Powered-By: Express
Location: /managers/2
Content-Type: application/json; charset=utf-8
Content-Length: 54
Date: Mon, 02 Dec 2013 02:23:50 GMT
Connection: keep-alive

{
  "id": 2,
  "name": "Merc",
  "surname": "Mobily"
}

$ curl -i -XGET  http://localhost:3000/managers/2
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 54
ETag: "-264833935"
Date: Mon, 02 Dec 2013 02:24:58 GMT
Connection: keep-alive

{
  "id": 2,
  "name": "Merc",
  "surname": "Mobily"
}

It all works!

Mixins are a powerful way to specialise a generic constructor.

For example, the constructor JsonRestStores on its own is hardly useful as it doesn't allow you to wait for requests and actually serve them. On its own, calling protocolListen( 'HTTP', { app: app } ); will fail, because protocolListen() will attempt to run the method protocolListenHTTP( { app: app } ), which isn't defined.

The good news is that the mixin JsonRestStores.HTTPMixin implements protocolListenHTTP() (as well as the corresponding protocolSendHTTP()), which makes protocolListen( 'HTTP', { app: app } ); work.

You can mix a store with as many protocol mixins as you like (although at this stage only HTTP is currently implemented).

HTTPMixin is only one piece of the puzzle: on its own, it's not enough. JsonRestStores mixed with HTTPMixin creates JSON REST stores with the following data-manipulation methods left unimplemented (they will throw an error if they are run):

  • implementFetchOne: function( request, cb )
  • implementInsert: function( request, forceId, cb )
  • implementUpdate: function( request, deleteUnsetFields, cb )
  • implementDelete: function( request, cb )
  • implementQuery: function( request, next )
  • implementReposition: function( doc, where, beforeId, cb )

Implementing these methods is important to tell JsonRestStores how to actually manipulate the store's data. You can do it yourself by hand, but if you want to save a few hundred hours, this is exactly what JsonRestStores.SimpleDbLayerMixin does: it's a mixin that enriches the basic JsonRestStores objects with all of the methods listed above, using a database as data storage.
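
To give an idea of what doing it "by hand" means, here is a minimal in-memory sketch of two of these methods. This is not how SimpleDbLayerMixin works internally, and the details of the request object used here (request.params, request.body) as well as the "not found" convention are assumptions made purely for illustration:

// A hand-rolled store keeping its records in a plain array (sketch only)
var ManagersInMemory = declare( JsonRestStores, JsonRestStores.HTTPMixin, {

  storeName: 'managersInMemory',
  publicURL: '/managersInMemory/:id',

  schema: new Schema({
    name: { type: 'string', trim: 60 },
  }),

  handleGet: true,
  handlePost: true,

  data: [], // Our "database": a single shared array of records

  implementFetchOne: function( request, cb ){
    var id = request.params[ this.idProperty ];
    var found = this.data.filter( function( r ){ return r.id == id; } )[ 0 ];
    cb( null, found || null );
  },

  implementInsert: function( request, forceId, cb ){
    var record = {};
    for( var k in request.body ) record[ k ] = request.body[ k ];
    record.id = forceId || this.data.length + 1;
    this.data.push( record );
    cb( null, record );
  },
});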

So when you write:

var Managers = declare( JsonRestStores, JsonRestStores.HTTPMixin, JsonRestStores.SimpleDbLayerMixin, {

You are creating a constructor, Managers, mixing in the prototypes of JsonRestStores (the generic, unspecialised constructor for Json REST stores), HTTPMixin (which makes protocolListen( 'HTTP', { app: app } ); work) and JsonRestStores.SimpleDbLayerMixin (which provides the implementations of implementFetchOne(), implementInsert(), etc. to manipulate data).

SimpleDbLayerMixin will use the DbLayer attribute of the store as the constructor used to create "table" objects, and will manipulate data with them.

DbLayer itself is created using the same pattern as Managers.

SimpleDbLayer on its own is useless: it creates a DB layer with the following methods left unimplemented:

  • select( filters, options, cb )
  • update( conditions, updateObject, options, cb )
  • insert( record, options, cb )
  • delete( conditions, options, cb )
  • reposition: function( record, where, beforeId, cb )

The implementation will obviously depend on the database layer. So, when you type:

var DbLayer = declare( SimpleDbLayer, MongoMixin );

You are creating a constructor, DbLayer, that is the mixin of SimpleDbLayer (where select() update() etc. are not implemented) and MongoMixin (which implements select(), update() etc. using MongoDB as the database layer).

This is the beauty of mixins: they implement the missing methods in a generic, unspecialised constructor.

When you define a store like this:

var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', trim: 60 },
    surname: { type: 'string', trim: 60 },
  }),

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,

  hardLimitOnQueries: 50,
});

managers.protocolListen( 'HTTP', { app: app } );

The publicURL is used to:

  • Add id: { type: 'id' } to the schema automatically. This is done so that you don't have to do the grunt work of defining id in the schema when it's already in publicURL.
  • Create the paramIds array for the store. In this case, paramIds will be [ 'id' ].

So, you could reach the same goal without publicURL:

var Managers = declare( Store, {

  schema: new Schema({
    id     : { type: 'id' },
    name   : { type: 'string', trim: 60 },
    surname: { type: 'string', trim: 60 },
  }),

  storeName: 'managers',
  paramIds: [ 'id' ],

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,

  hardLimitOnQueries: 50,
});

var managers = new Managers();
JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } ); // This will throw()

Note that:

  • The id parameter had to be defined in the schema
  • The paramIds array had to be defined by hand
  • managers.protocolListen( 'HTTP', { app: app } ); can't be used as the public URL is not there

This pattern is much more verbose, and it doesn't allow the store to be placed online with protocolListen().

In any case, the property idProperty is set to the last element of paramIds; in this example, it is id.

In the documentation, I will often refer to paramIds, which is an array of elements in the schema matching the ones in the route. However, in all examples I will declare stores using the "shortened" version.
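
For example (a sketch, reusing the workspaces/managers URL used later in this document):

// Given this publicURL...
publicURL: '/workspaces/:workspaceId/managers/:id',

// ...JsonRestStores will derive:
// paramIds   = [ 'workspaceId', 'id' ]
// idProperty = 'id'   (the last element of paramIds)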

Nested stores

Stores are never "flat" as such: you have workspaces, and then you have users who "belong" to a workspace. Here is how you create a "nested" store:

var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', trim: 60 },
    surname: { type: 'string', searchable: true, trim: 60 },
  }),

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,
});
var managers = new Managers();

var ManagersCars = declare( Store, {

  schema: new Schema({
    make     : { type: 'string', trim: 60, required: true },
    model    : { type: 'string', trim: 60, required: true },
  }),

  storeName: 'managersCars',
  publicURL: '/managers/:managerId/cars/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,
});
var managersCars = new ManagersCars();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );
managersCars.protocolListen( 'HTTP', { app: app } );

You have two stores: one is the simple managers store with a list of names and surnames; the other one is the managersCars store: note how the URL for managersCars includes managerId.

The managersCars store will respond to GET /managers/2222/cars/3333 (to fetch car 3333 of manager 2222), GET /managers/2222/cars/ (to get all cars of manager 2222), and so on.

Remember that in managersCars remote queries will always honour the filter on managerId, both in queries (GET without an id as last parameter) and single-record operations (GET with a specific id). This happens thanks to SimpleDbLayerMixin (more about this later).
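
For example (a hypothetical session, with made-up ids and records):

# Only cars whose managerId is 2222 are returned
$ curl -i -XGET http://localhost:3000/managers/2222/cars/

# A car that exists but belongs to a different manager is filtered out,
# so this returns a NotFound error
$ curl -i -XGET http://localhost:3000/managers/2222/cars/9999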

If you have two nested tables like the ones shown above, you might want to be able to look up fields automatically. JsonRestStores allows you to do so using the nested property.

For example:

var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', trim: 60 },
    surname: { type: 'string', searchable: true, trim: 60 },
  }),

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,

  nested: [
    {
      type: 'multiple',
      store: 'managersCars',
      join: { managerId: 'id' },
    }
  ],

});
var managers = new Managers();

var ManagersCars = declare( Store, {

  schema: new Schema({
    make     : { type: 'string', trim: 60, required: true },
    model    : { type: 'string', trim: 60, required: true },
  }),

  storeName: 'managersCars',
  publicURL: '/managers/:managerId/cars/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,

  nested: [
    {
      type: 'lookup',
      localField: 'managerId',
      store: 'managers',
    }
  ],
});
var managersCars = new ManagersCars();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );
managersCars.protocolListen( 'HTTP', { app: app } );

This is an example where using JsonRestStores really shines: when you use GET to fetch a manager, the object's attribute manager._children.managersCars will be an array of all cars joined to that manager. Also, when you use GET to fetch a car, the object's attribute car._children.managerId will be an object representing the correct manager. This is immensely useful in web applications, as it saves tons of HTTP calls for lookups. NOTE: The child store's extrapolateDoc() and prepareBeforeSend() methods will be called on the child's data (as you would expect). Keep in mind that when those methods are being called on nested data, request.nested will be set to true.
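
For example, a GET on a manager might return something like this (the records and the exact output shape are made up for illustration):

{
  "id": 2,
  "name": "Tony",
  "surname": "Mobily",
  "_children": {
    "managersCars": [
      { "id": 10, "managerId": 2, "make": "Fiat", "model": "Panda" },
      { "id": 11, "managerId": 2, "make": "Toyota", "model": "Corolla" }
    ]
  }
}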

Note that in nested objects the store names are passed as strings, rather than objects; this is important: in this very example, you can see store: 'managersCars', as a nested store, but at that point managersCars hasn't been declared yet. The store names in nested will be resolved later, by the JsonRestStores.init() function, using JsonRestStores' registry for the lookup. This is why it's crucial to run JsonRestStores.init() only when all of your stores have been created (and are therefore in the registry).

Fetching of nested data is achieved by SimpleDbLayerMixin by using SimpleDbLayer's nesting abilities, which you should check out. If you do check it out, you will see strong similarities between JsonRestStores' nested parameter and SimpleDbLayer's. If you have used nested parameters in SimpleDbLayer, then you will easily see that JsonRestStores simply makes sure that the required attributes for nested entries are there; for each nested entry it will add a layer property (based on the store's own collectionName) and a layerField property (based on the store's own idProperty).

You can change the name of the property in _children by adding a prop parameter to nested:

nested: [
  {
    type: 'lookup',
    localField: 'userId',
    store: 'usersPrivateInfo',
    prop: 'usersPrivateInfo'
  },

  {
    type: 'lookup',
    localField: 'userId',
    store: 'usersContactInfo',
    prop: 'usersContactInfo'
  }

],

This proves useful when there is a clash. In this case, the userId field is used twice: once to pull information from usersPrivateInfo and again to pull information from usersContactInfo.

Naming conventions for stores

It's important to be consistent in naming conventions while creating stores. In this case, code is clearer than a thousand bullet points:

var Managers = declare( Store, {

  schema: new Schema({
    // ...
  }),

  publicURL: '/managers/:id',

  storeName: 'managers',
  // ...
});
var managers = new Managers();

var People = declare( Store, {

  schema: new Schema({
    // ...
  }),

  publicURL: '/people/:id',

  storeName: 'people',
  // ...
});
var people = new People();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );
people.protocolListen( 'HTTP', { app: app } );

  • Store names are lowercase and plural everywhere (they are collections representing multiple entries)
  • Irregulars (Person => People) are a fact of life
  • Store constructors (derived from Store) are in capital letters (as constructors, they should be)
  • Store variables are in small letters (they are normal object variables)
  • storeName attributes are in small letters (to follow the lead of variables)
  • URLs are in small letters (following the stores' names, since everybody knows that /Capital/Urls/Are/Silly)

var Managers = declare( Store, {

  schema: new Schema({
    // ...
  }),

  publicURL: '/managers/:id',

  storeName: 'managers',
  // ...
});
var managers = new Managers();

var ManagersCars = declare( Store, {

  schema: new Schema({
    // ...
  }),

  publicURL: '/managers/:managerId/cars/:id',

  // ...
  storeName: 'managersCars',
  // ...
});
var managerCars = new ManagersCars();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );
managerCars.protocolListen( 'HTTP', { app: app } );

  • The nested store's name starts with the parent store's name (managers) keeping pluralisation
  • The URL is in small letters, starting with the URL of the parent store

Permissions

By default, everything is allowed: stores allow pretty much anything and everything; anybody can DELETE, PUT, POST, etc. Fortunately, JsonRestStores allows you to decide exactly what is allowed and what isn't, by overriding specific methods.

Every method runs the method checkPermissions() before continuing. If everything went fine, checkPermissions() will call the callback with true: cb( null, true ); otherwise, to fail, cb( null, false ).

The checkPermissions() method has the following signature:

checkPermissions: function( request, method, cb )

Here:

  • request. It is the request object
  • method. It can be post, put, get, getQuery, delete

Here is an example of a store that only allows deletion to logged-in users:

// The basic schema for the WorkspaceUsers table
var WorkspaceUsers = declare( Store, {

  schema: new Schema({
    email     :  { type: 'string', trim: 128, searchable: true, sortable: true  },
    name      :  { type: 'string', trim: 60, searchable: true, sortable: true  },
  }),

  storeName:  'workspaceUsers',
  publicURL: '/workspaces/:workspaceId/users/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,

  checkPermissions: function( request, method, cb ){

    // This will only affect `delete` methods
    if( method !== 'delete' ) return cb( null, true );

    // User is logged in: all good
    if( request._req.session.user ){
      cb( null, true );

    // User is not logged in: fail!
    } else {
      cb( null, false, "Must login" );
    }
  },

});
var workspaceUsers = new WorkspaceUsers();
workspaceUsers.protocolListen( 'HTTP', { app: app } );

Permission checking can be as simple, or as complex, as you need it to be.

Note that if your store is derived from another one, and you want to preserve your master store's permission model, you can run this.inheritedAsync( f, arguments, cb ) like so:

  checkPermissions: function f( request, method, cb ){

    this.inheritedAsync( f, arguments, function( err, granted, message ) {
      if( err ) return cb( err, false );

      // The parent's checkPermissions() method failed: this will honour that fail
      if( ! granted) return cb( null, false, message );

      // This will only affect `delete` methods
      if( method !== 'delete' ) return cb( null, true );

      // User is admin (id: 1 )
      if( request._req.session.user === 1){
        cb( null, true );

      // User is not logged in: fail!
      } else {
        cb( null, false, "Must login" );
      }
    });
  },

This will ensure that the inherited checkPermissions() method is called and honoured, and then further checks are carried out.

Please note that checkPermissions() is only run for remote requests, that is, requests with remote set to true. Requests coming through the API bypass the method.

Single fields

Advanced applications allow users to make a PUT call as soon as they leave a field, rather than on submit of the whole form. To facilitate this, JsonRestStores implements "single fields", where get and put calls will only affect a single field -- and yet you keep all of the permission checks and hooks of a normal store request.

To do that, all you have to do is mark fields as singleField in your schema -- that's it! For example:

// Basic definition of the managers store
var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', trim: 60, singleField: true },
    surname: { type: 'string', searchable: true, trim: 60, singleField: true },
  }),

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,
});

var managers = new Managers();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );

In this case you are able to GET and PUT the fields name and surname, both marked as singleField, individually. For example, HTTPMixin will create the following routes:

  • GET /managers/:id/name
  • PUT /managers/:id/name
  • GET /managers/:id/surname
  • PUT /managers/:id/surname
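
For example (a hypothetical session against the store above; the form-encoded body used for the single-field PUT is an assumption, mirroring the earlier curl examples):

# Fetch just the `name` field of manager 2
$ curl -i -XGET http://localhost:3000/managers/2/name

# Update just the `name` field of manager 2
$ curl -i -X PUT -d "name=Tony" http://localhost:3000/managers/2/name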

Note that all of the permissions checks and hooks will be called as per normal for put and get requests. If you need to differentiate in your code, you can simply check for the request.options.field property.
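
For example, here is a sketch of a checkPermissions() hook that applies a stricter rule only to single-field PUTs (the rule itself is made up):

checkPermissions: function( request, method, cb ){

  // Single-field GET/PUT requests have request.options.field set
  if( request.options.field === 'surname' && method === 'put' ){
    // Hypothetical rule: only logged-in users may change the surname field
    if( ! request._req.session.user ) return cb( null, false, "Must login" );
  }

  cb( null, true );
},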

Unique fields

You can add the unique attribute to a field in the schema; this will ensure that PUT and POST operations will never allow duplicate data within the same store.

For example:

// Basic definition of the managers store
var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', trim: 60, singleField: true },
    surname: { type: 'string', searchable: true, trim: 60, singleField: true },
    email  : { type: 'string', searchable: true, unique: true, trim: 60, singleField: true },
  }),

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,
});

var managers = new Managers();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );

In this case, JsonRestStores will ensure that email is unique. Note that you should also do this at index-level, and that JsonRestStores does a soft check. This means that it won't handle race conditions where two concurrent calls might end up checking for the duplicate at the same time, and therefore allowing a duplicate record. So, generally speaking, if it's crucial that your app doesn't have a duplicate you will need to enforce this at index-level.

Remember that all fields marked as unique must also be declared as searchable.
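
For example, with the MongoDB native driver (the only database supported at the moment) you could back the unique field above with a hard index like this. The collection name is assumed to match the store name:

// Enforce uniqueness at database level too, since JsonRestStores' check is a soft one
db.collection( 'managers' ).createIndex( { email: 1 }, { unique: true }, function( err ){
  if( err ) console.error( "Error creating the unique index: ", err );
});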

Automatic lookup

When you have a nested store, you would normally check if the intermediate ID in the URL or in the body actually resolves to an existing record. It's also often important, when checking for store permissions, to access that record.

Imagine that you have this store:

var ManagersCars = declare( Store, {

  schema: new Schema({
    make     : { type: 'string', trim: 60, required: true },
    model    : { type: 'string', trim: 60, required: true },
  }),

  storeName: 'managersCars',
  publicURL: '/managers/:managerId/cars/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,

  autoLookup: {
    managerId: 'managers'
  },

  nested: [
    {
      type: 'lookup',
      localField: 'managerId',
      store: 'managers',
    }
  ],
});
var managersCars = new ManagersCars();

Note that the store has an extra autoLookup property, where:

  • the key is the name of the field, in params, for which automatic lookup will happen
  • the value is the name of the store where the lookup will happen

Note that:

  • The lookup is always carried out using the looked up store's ID, which needs to correspond to the value in the URL param or body
  • If the lookup fails, the store returns a NotFound error
  • If the lookup is successful, the looked up store's data is available under request.lookup.managerId. To get the record, JsonRestStores uses the store's table primitives; so, none of the normal hooks are called. Basically, in request.lookup.managerId the record is fetched "as is".

A typical use case is to check, in a PUT for example, that only the logged in manager can change the car record:

checkPermissions: function f( request, method, cb ){

  // First of all: user MUST be logged in
  if( ! request.session.userId ) return cb( null, false );

  // Get is always allowed
  if( method === 'get' ) return cb( null, true );

  // managerId is in the `autoLookup` list, hence this will work
  if( request.lookup.managerId.id == request.session.userId ) return cb( null, true );

  // Denied in other cases
  return cb( null, false );
},

The position attribute

When creating a store, you can set the position parameter as true. For example:

    var Managers= declare( Store, {
 
      schema: new Schema({
        name   : { type: 'string', trim: 60 },
        surname: { type: 'string', trim: 60 },
      }),
 
      position: true,
 
      storeName: 'managers',
      publicURL: '/managers/:id',
 
      handlePut: true,
      handlePost: true,
      handleGet: true,
      handleGetQuery: true,
      handleDelete: true,
    });
    var managers = new Managers();
 
    JsonRestStores.init();
    managers.protocolListen( 'HTTP', { app: app } );

The position attribute means that PUT and POST calls will have to honour positioning based on options.putBefore and options.putDefaultPosition.

The main use of position: true is that when no sorting is requested by the client, the items will be ordered correctly depending on their "natural" positioning.

Positioning will take into account the store's paramIds. This means that if you have a store like this:

    var Managers= declare( Store, {
 
      schema: new Schema({
        workspaceId: { type: 'id' },
        name       : { type: 'string', trim: 60 },
        surname    : { type: 'string', trim: 60 },
      }),
 
      position: true,
 
      storeName: 'managers',
      publicURL: '/workspaces/:workspaceId/managers/:id',
 
      handlePut: true,
      handlePost: true,
      handleGet: true,
      handleGetQuery: true,
      handleDelete: true,
    });
    var managers = new Managers();

Positioning will have to take into account workspaceId when repositioning: if a user in workspace A repositions an item, it mustn't affect positioning in workspace B. Basically, when doing positioning, paramIds define the domain of repositioning (in this case, elements with matching workspaceIds will belong to the same domain).

File uploads

JsonRestStores in itself doesn't manage file uploads. The reason is simple: file uploads are a very protocol-specific feature. For example, you can decide to upload files, along with your form, by using multipart/form-data as your Content-Type, to instruct the browser to encode the information (before sending it to the server) in a specific way that accommodates multiple file uploads. However, this is a separate issue to the store itself, which will only ever store the file's name, rather than the raw data.

Basically, all of the features to upload files in JsonRestStores are packed in HTTPMixin. This is how you do it:

var VideoResources = declare( [ Store ], {

  schema: new HotSchema({
    fileName         : { default: '', type: 'string', protected: true, singleField: true, trim: 128, searchable: false },
    description      : { default: '', type: 'string', trim: 1024, searchable: false },
  }),

  uploadFields: {
    fileName: {
      destination: '/home/www/node/deployed/node_modules/wonder-server/public/resources',
    },
  },

  storeName:  'videoResources',

  publicURL: '/videoResources/:id',
  hotExpose: true,

  handlePut: true,
  handlePost: true,
  handleGet: true,

});
var videoResources = new VideoResources();

In this store:

  • The store has an uploadFields attribute, which lists the fields that will represent file paths resulting from the successful upload
  • The field is marked as protected; remember that the field represents the file's path, and that you don't want users to directly change it.

For this store, HTTPMixin will do the following:

  • add a middleware in stores with uploadFields, adding the ability to parse multipart/form-data input from the client
  • save the files in the required location
  • set req.body.fileName as the file's path, and req.bodyComputed.fileName to true.

JsonRestStores will simply see values in req.body, blissfully unaware of the work done to parse the requests' input and the work to automatically populate req.body with the form values as well as the file paths.

The fact that the fields are protected means that you are not forced to re-upload files every time you submit the form: if the values are set (thanks to an upload), they will change; if they are not set, they will be left "as is".

On the backend, JsonRestStores uses multer, a powerful multipart-parsing module. However, this is basically transparent to JsonRestStores and to developers, except for some familiarity with the configuration options.

The store above only covers a limited use case. Remember that the file object has the following fields:

  • fieldname - Field name specified in the form
  • originalname - Name of the file on the user's computer
  • encoding - Encoding type of the file
  • mimetype - Mime type of the file
  • size - Size of the file in bytes
  • destination - The folder to which the file has been saved
  • filename - The name of the file within the destination
  • path - The full path to the uploaded file

In order to configure file uploads, you can set three attributes when you declare your store:

  • uploadFilter -- to filter incoming files based on their names or fieldName
  • uploadLimits -- to set some upload limits, after which JsonRestStores will throw an UnprocessableEntity error.
  • uploadFields -- it can have two properties: destination and fileName.

Here are these options in detail:

uploadFilter allows you to filter files based on their names and fieldnames. You only have a limited amount of information for each file:

{ fieldname: 'fileName',
  originalname: 'test.mp4',
  encoding: '7bit',
  mimetype: 'video/mp4' }

This is especially useful if you want to check for swearwords in the file name, or the file type. You can reject the file if its type doesn't correspond to what you were expecting:

uploadFilter: function( req, file, cb ){
  if( file.mimetype != 'video/mp4') return cb( null, false );
  cb( null, true );
},

uploadLimits allows you to set specific upload limits. The list comes straight from busboy, on which multer (and therefore JsonRestStores' upload handling) is based:

  • fieldNameSize -- Max field name size (in bytes) (Default: 100 bytes).
  • fieldSize -- Max field value size (in bytes) (Default: 1MB).
  • fields -- Max number of non-file fields (Default: Infinity).
  • fileSize -- For multipart forms, the max file size (in bytes) (Default: Infinity).
  • files -- For multipart forms, the max number of file fields (Default: Infinity).
  • parts -- For multipart forms, the max number of parts (fields + files) (Default: Infinity).
  • headerPairs -- For multipart forms, the max number of header key=>value pairs to parse Default: 2000 (same as node's http).

For example, the most typical use case would be:

uploadLimits: {
  fileSize: 50000000 // 50 Mb
},

uploadFields is the heart of the upload abilities of JsonRestStores.

It accepts two parameters for each field: destination and fileName.

destination is mandatory, and defines where the files connected to that field will be stored. It can either be a string, or a function with the following signature: function( req, file, cb ). It will need to call the callback cb with cb( null, FULL_PATH ). For example:

uploadFields: {

  avatarImage: {
    destination: function (req, file, cb) {
      // This can depend on req, or file's attribute
      cb( null, '/tmp/my-uploads');
    }
  }

},

fileName is a function that will determine the file name. By default, it works out the file name from the field name and either the record's ID (for PUT requests, where the ID is known) or a random string (for POST requests, where the ID is not known).

If you don't set it, it will be:

uploadFields: {

  avatarImage: {
    destination: function (req, file, cb) {
      // This can depend on req, or file's attribute
      cb( null, '/tmp/my-uploads');
    },

    // If the ID is there (that's the case with a PUT), then use it. Otherwise,
    // simply generate a random string
    fileName: function( req, file, cb ){
      var id = req.params[ this.idProperty ];
      if( ! id ) id = crypto.randomBytes( 20 ).toString( 'hex' );

      // That's it
      return cb( null, file.fieldname + '_' + id );
    }
  }
},

The default function works fine in most cases. However, you may want to change it.

By default, when there is an error, the file upload module multer will throw an error. It's much better to encapsulate those errors in HTTP errors. This is what uploadErrorProcessor does. By default, it's defined as follows (although you can definitely change it if needed):

uploadErrorProcessor: function( err, next ){
  var ReturnedError = new this.UnprocessableEntityError( (err.field ? err.field : '' ) + ": " + err.message );
  ReturnedError.OriginalError = err;
  return next( ReturnedError );
},

deleteAfterGetQuery: automatic deletion of records after retrieval

If your store has the deleteAfterGetQuery attribute set to true, it will automatically delete any elements fetched with a getQuery method (that is, a GET run without the final id, and therefore fetching elements based on a filter). This is done by forcing options.delete to true (unless it was otherwise defined) in makeGetQuery().

This is especially useful when a store has, for example, a set of records that need to be retrieved by a user only once (like message queues).
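
For example, a minimal message-queue-like store could look like this (a sketch following the same pattern as the stores above):

var ManagerMessages = declare( Store, {

  schema: new Schema({
    text: { type: 'string', trim: 256 },
  }),

  storeName: 'managerMessages',
  publicURL: '/managerMessages/:id',

  handlePost: true,
  handleGetQuery: true,

  // Records are deleted as soon as they are returned by a query
  deleteAfterGetQuery: true,
});
var managerMessages = new ManagerMessages();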

hardLimitOnQueries: limit the number of records

If your store has the hardLimitOnQueries attribute set, any getQuery method (that is, a GET without the final id, and therefore fetching elements based on a filter) will never return more than hardLimitOnQueries results (unless you are using JsonRestStores' API, and manually set options.skipHardLimitOnQueries to true).

Customise search rules

In the previous examples, I explained how marking a field as searchable in the schema has the effect of making it searchable in queries:

var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', trim: 60 },
    surname: { type: 'string', searchable: true, trim: 60 },
  }),

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,
});

var managers = new Managers();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );

If you query the store with http://localhost:3000/managers/?surname=mobily, it will only return elements where the surname field matches.

In JsonRestStores you actually define what fields are acceptable as filters with the parameter onlineSearchSchema, which is defined exactly as a schema. So, writing this is equivalent to the code just above:

var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', trim: 60 },
    surname: { type: 'string', searchable: true, trim: 60 },
  }),

  onlineSearchSchema: new Schema( {
    surname: { type: 'string', trim: 60 },
  }),

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,
});

var managers = new Managers();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );

If onlineSearchSchema is not defined, JsonRestStores will create one based on your main schema by doing a shallow copy, excluding paramIds (which means that, in this case, id is not added automatically to onlineSearchSchema, which is most likely what you want).

If you define your own onlineSearchSchema, you are able to decide exactly how you want to filter the values. For example you could define a different default, or trim value, etc. However, in common applications you can probably live with the auto-generated onlineSearchSchema.

You can decide how the elements in onlineSearchSchema will be turned into a search with the queryConditions parameter.

queryConditions is normally automatically generated for you if it's missing. So, not passing it is the same as writing:

var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', trim: 60 },
    surname: { type: 'string', searchable: true, trim: 60 },
  }),

  onlineSearchSchema: new Schema( {
    surname: { type: 'string', trim: 60 },
  }),

  queryConditions: {
    type: 'eq',
    args: [ 'surname', '#surname#']
  },

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,
});

var managers = new Managers();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );

Basically, queryConditions is automatically generated so that the surname field in the database matches the surname entry in the query string (that's what #surname# stands for).

Remember that here:

queryConditions: {
  type: 'eq',
  args: [ 'surname', '#surname#']
},

surname refers to the database field surname, whereas #surname# refers to the query string's surname element (which is cast thanks to onlineSearchSchema).

If you had defined both name and surname as searchable, queryConditions would have been generated as:

queryConditions: {
  type: 'and',
  args: [
    { type: 'eq', args: [ 'name', '#name#' ] },
    { type: 'eq', args: [ 'surname', '#surname#' ] }
  ]
},

Basically, both name and surname need to match their respective values in the query string. To know more about the syntax of queryConditions, please have a look at the conditions object in SimpleDbLayer.

You can effectively create any kind of query based on the passed parameters. For example, you could create a searchAll field like this:

onlineSearchSchema: new Schema( {
  searchAll: { type: 'string', trim: 60 },
}),

queryConditions: {
  type: 'or',
  ifDefined: 'searchAll',
  args: [
    { type: 'startsWith', args: [ 'number', '#searchAll#' ] },
    { type: 'startsWith', args: [ 'firstName', '#searchAll#' ] },
    { type: 'startsWith', args: [ 'lastName', '#searchAll#' ] },
  ]
},

This example highlights that onlineSearchSchema fields don't have to match existing fields in the schema: they can be anything, which is then used as a #field# value in queryConditions. They are basically values that will be used when constructing the actual query in queryConditions.

Keep in mind that the syntax of JsonRestStore's queryConditions is identical to the syntax of the conditions object in SimpleDbLayer, with the following extras:

  1. Value resolution

In JsonRestStores, when a value is in the format #something#, that something will be replaced by the corresponding value in the query string when making queries. If something is not passed in the query string, that section of the query is ignored.

  2. ifDefined to filter out chunks

You can have the attribute ifDefined set as a value in queryConditions: in this case, that section of the query will only be evaluated if the corresponding value in the query string is defined.

For example, you could define queryConditions as:

queryConditions: {
  type: 'and',
  args: [

    {
      type: 'and', ifDefined: 'surname', args: [
        { type: 'startsWith', args: [ 'surname', '#surname#' ] },
        { type: 'eq', args: [ 'active', true ] },
      ]
    },

    {
      type: 'startsWith', args: [ 'name', '#name#']
    }
  ]
},

The strings #surname# and #name# are translated into their corresponding values in the query string. The ifDefined means that the whole section of the query will be ignored unless surname is passed in the query string. The comparison operators, which were eq in the generated queryConditions, are now the much more useful startsWith.

  3. if to filter out chunks with a function

If ifDefined isn't quite enough, you can use the more powerful if:

queryConditions: {
  'if': function( request ){ return !request.isAdmin; },
  type: 'eq',
  args: [ 'hidden', false ]
},

The condition will only apply if the function returns a truthy value. (It's up to the application to set request.isAdmin beforehand.) The scope of the function is the store itself.

  4. The immensely useful each statement

You will often want to break down a string into words, and then use those individual words in your search criteria. This is what each is for. This will be a much more powerful implementation of searchAll:

onlineSearchSchema: new Schema( {
  searchAll: { type: 'string', trim: 60 },
  userId: { type: 'id' }
}),

queryConditions: {
  type: 'and',
  args: [

    // First: filter by userId if passed
    { type: 'eq', args: [ 'userId', '#userId#'] },

    // Second: must satisfy _each_ condition based on the breakdown of #searchAll#, space-separated
    { type: 'each', value: 'searchAll', as: 'searchAllEach', linkType: 'and', separator: ' ', args: [
      { type: 'or', args: [
        { type: 'contains', args: [ 'title', '#searchAllEach#' ] },
        { type: 'contains', args: [ 'videosTags.tagName', '#searchAllEach#' ] },
      ]},
    ]},
  ]
},

Note that it comes with defaults, so that the each line could have looked like this:

{ type: 'each', value: 'searchAll', args: [

Since linkType defaults to and, the separator defaults to a space, and the as field defaults to the name of the value with Each added at the end.

queryConditions is basically a very powerful engine that will generate the queries for you based on what parameters were passed.

Thanks to queryConditions you can define any kind of query you like. The good news is that you can also search in children tables that are defined as nested in the store definitions.

For example:

var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', searchable: true, trim: 60 },
    surname: { type: 'string', searchable: true, trim: 60 },
  }),

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,

  onlineSearchSchema: new HotSchema({
    name    : { type: 'string', trim: 60 },
    surname : { type: 'string', trim: 60 },
    carInfo : { type: 'string', trim: 30 },
  }),

  queryConditions: {
    type: 'and',
    args: [

      {
        type: 'startsWith', args: [ 'surname', '#surname#']
      },

      {
        type: 'or',
        ifDefined: 'carInfo',
        args: [
          { type: 'startsWith', args: [ 'managersCars.make', '#carInfo#' ] },
          { type: 'startsWith', args: [ 'managersCars.model','#carInfo#' ] },
        ]
      }
    ]
  },

  nested: [
    {
      type: 'multiple',
      store: 'managersCars',
      join: { managerId: 'id' },
    }
  ],

});
var managers = new Managers();

var ManagersCars = declare( Store, {

  schema: new Schema({
    make     : { type: 'string', trim: 60, searchable: true, required: true },
    model    : { type: 'string', trim: 60, searchable: true, required: true },
  }),

  onlineSearchSchema: new HotSchema({
    make       : { type: 'string', trim: 60 },
    model      : { type: 'string', trim: 60 },
    managerInfo: { type: 'string', trim: 60 }
  }),

  queryConditions: {
    type: 'and',
    args: [

      { type: 'startsWith', args: [ 'make', '#make#'] },

      { type: 'startsWith', args: [ 'model', '#model#'] },

      {
        type: 'or',
        ifDefined: 'managerInfo',
        args: [
          { type: 'startsWith', args: [ 'managers.name', '#managerInfo#' ] },
          { type: 'startsWith', args: [ 'managers.surname', '#managerInfo#' ] },
        ]
      }
    ]
  },

  storeName: 'managersCars',
  publicURL: '/managers/:managerId/cars/:id',

  handlePut: true,
  handlePost: true,
  handleGet: true,
  handleGetQuery: true,
  handleDelete: true,

  nested: [
    {
      type: 'lookup',
      localField: 'managerId',
      store: 'managers',
    }
  ],
});
var managersCars = new ManagersCars();

JsonRestStores.init();
managers.protocolListen( 'HTTP', { app: app } );
managersCars.protocolListen( 'HTTP', { app: app } );

You can see how for example in Managers, onlineSearchSchema has a mixture of fields that match the ones in the schema (name, surname) that look for a match in the corresponding fields, as well as search-specific fields (like carInfo) that end up looking into the nested children.

It's totally up to you how you want to organise your searches. For example, you might decide to make a searchAll field instead for Managers:

onlineSearchSchema: new HotSchema({
  searchAll : { type: 'string', trim: 60 },
}),

queryConditions: {
  type: 'or',
  ifDefined: 'searchAll',
  args: [
    { type: 'startsWith', args: [ 'name', '#searchAll#'] },
    { type: 'startsWith', args: [ 'surname', '#searchAll#'] },
    { type: 'startsWith', args: [ 'managersCars.make', '#searchAll#' ] },
    { type: 'startsWith', args: [ 'managersCars.model','#searchAll#' ] },
  ]
},

In this case, the only allowed field in the query string will be searchAll which will look for a match anywhere.

Sorting options and default sort

A client can require data sorting by setting the sortBy parameter in the query string. This means that there shouldn't be a sortBy element in the onlineSearchSchema attribute. JsonRestStores will parse the query string, and make sure that data is fetched in the right order.

In JsonRestStores you can also decide some default fields that will be used for sorting, in case no sorting option is defined in the query string.

The sortBy attribute is in the format +field1,+field2,-field3 which will instruct JsonRestStores to sort by field1, field2 and field3 (with field3 being sorted in reverse).
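
For example, following that format, a client could request (a hypothetical call against the managers store):

# Sort by surname ascending, then by name descending
$ curl -i -XGET "http://localhost:3000/managers/?sortBy=+surname,-name"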

When you create a store, you can decide which fields are sortable:

For example:

var Managers = declare( Store, {

  schema: new Schema({
    name   : { type: 'string', searchable: true, trim: 60 },
    surname: { type: 'string', searchable: true, trim: 60 },
  }),

  storeName: 'managers',
  publicURL: '/managers/:id',

  handlePut: true,
  handle