# tiny in-memory CouchDB wannabe


This is an experiment: assuming a CouchDB dataset small enough to replicate to the client, why not simply keep it in memory?

Access the data with synchronous calls (including a simple "full table scan" style query mechanism) while still keeping basic track of changes. Some other code can then handle the "eventual consistency" part in the background, asynchronously replicating to something like PouchDB or CouchDB.

What could possibly go wrong?

NOTE: Said replication engine has not been implemented yet, which does put a damper on the relationship.
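The bookkeeping this design implies — a map of documents plus a CouchDB-style update sequence, so a replicator can later ask "what changed since seq N?" — can be sketched independently of memcouch. This is an illustrative toy, not memcouch's actual internals:

```js
// Toy sketch (NOT memcouch's implementation): an in-memory store where
// every write bumps an update sequence, CouchDB-style.
function makeStore() {
  var docs = {};        // _id -> latest version of the document
  var seqs = {};        // _id -> sequence number of its latest write
  var updateSeq = 0;    // monotonically increasing write counter
  return {
    put: function (doc) {
      updateSeq += 1;
      docs[doc._id] = doc;
      seqs[doc._id] = updateSeq;
    },
    get: function (id) { return docs[id]; },
    // every doc written after `seq`, oldest first — a minimal "changes feed"
    since: function (seq) {
      return Object.keys(docs).filter(function (id) {
        return seqs[id] > seq;
      }).sort(function (a, b) {
        return seqs[a] - seqs[b];
      }).map(function (id) {
        return {seq: seqs[id], doc: docs[id]};
      });
    }
  };
}

var store = makeStore();
store.put({_id: 'zero', number: 0});
store.put({_id: 'aaaa', number: 3});
console.log(store.since(1).length);   // → 1 (only 'aaaa' was written after seq 1)
```

Note that re-writing a document moves it to the end of the feed — only the latest version of each document needs to be replicated.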

First `<script src="memcouch.js"></script>` or `npm install memcouch` as needed. Here's an example of usage:

```js
var memcouch = require('memcouch');
// ^^^ skip the require in the browser; the <script> tag provides `memcouch` globally

var db = memcouch.db();
db.put({_id:'zero', number:0});
db.put({_id:'aaaa', number:3});

// get all documents, sorted by emitted number (pass `true` or a custom comparator)
db.query(function (doc) { db.emit(doc.number); }, true);

// array of the _ids of all documents whose _id is longer than `min` characters
var min = 4;
db.query(function (doc) {
    if (doc._id.length > min) db.emit();
}).map(function (row) { return row.doc._id; });

var lastSeq = null;
function watcher(changeResult) {
    lastSeq = changeResult.seq;
    console.log(changeResult.doc._id + " changed!");
}
db.watch(watcher);

var doc = db.get('zero');
doc.number = Infinity;
db.put(doc);      // will log

doc.number = 0;
db.since(lastSeq);      // array of one changeResult
```
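Until the real replication engine lands, the intended division of labor can be sketched: some background code periodically drains `db.since(checkpoint)` and hands the changed documents to a remote. The `remote.push` below is a hypothetical stand-in, not a real PouchDB/CouchDB call:

```js
// Hypothetical one-shot replication step: push everything changed since
// `checkpoint` to a remote, and return the new checkpoint. `remote.push`
// is a placeholder for whatever PouchDB/CouchDB plumbing ends up here.
function replicateOnce(db, remote, checkpoint) {
  var changes = db.since(checkpoint);
  if (changes.length) {
    remote.push(changes.map(function (c) { return c.doc; }));
    checkpoint = changes[changes.length - 1].seq;
  }
  return checkpoint;   // caller persists this and calls again later
}
```

Calling this on a timer (or from a watcher callback) is one simple way to get the "eventually" in eventual consistency.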
## TODO

1. Implement default comparison for emitted arrays/objects
2. Figure out what's still needed, and implement replication to PouchDB and/or CouchDB (in separate files)
3. Profit!