Elsewhere is a Node.js project that aims to replicate the functionality of the Google Social Graph API.

It does this by crawling a target URL for rel=me microformatted links. It then crawls those links for more rel=me links, and so on, building a comprehensive graph as it goes.

Elsewhere provides a JSON API against which URLs can easily be queried via client-side JavaScript. It can also be included as a Node.js module and used directly in your server projects.

Once you've cloned the project and run npm install, start the server located at bin/elsewhere and point your browser at localhost:8888 to try it out.

To query against the example server's API in your code, your queries must be formatted like so:

http://localhost:8888/?url=[url you wish to query]
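For example, a query URL can be assembled in client-side JavaScript like this (the host and port assume you are running the example server locally):

    // Build a query URL for a local Elsewhere server.
    // encodeURIComponent ensures the target url survives as a
    // single query parameter.
    function buildQuery(targetUrl) {
      return 'http://localhost:8888/?url=' + encodeURIComponent(targetUrl);
    }

    console.log(buildQuery('http://chrisnewtn.com'));
    // http://localhost:8888/?url=http%3A%2F%2Fchrisnewtn.com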

The JSON it returns looks like this:

  {
    "results": [
      {
        "url": "http://chrisnewtn.com",
        "title": "Chris Newton",
        "favicon": "http://chrisnewtn.com/favicon.ico",
        "outboundLinks": {
          "verified": [ ... ],
          "unverified": [ ]
        },
        "inboundCount": {
          "verified": 4,
          "unverified": 0
        },
        "verified": true
      }
    ],
    "query": "http://chrisnewtn.com",
    "created": "2012-09-08T16:30:57.270Z",
    "crawled": 9,
    "verified": 9
  }
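As a sketch, the response can be consumed like any other JSON payload. In the snippet below, body stands in for the (abridged) string returned by the server:

    // Hypothetical snippet: pull the verified urls out of a graph response.
    var body = '{"results":[{"url":"http://chrisnewtn.com","verified":true},' +
               '{"url":"http://example.com","verified":false}],"crawled":9}';

    var data = JSON.parse(body);
    var verifiedUrls = data.results
      .filter(function (page) { return page.verified; })
      .map(function (page) { return page.url; });

    console.log(verifiedUrls); // [ 'http://chrisnewtn.com' ]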

To use Elsewhere as a Node.js module, clone it into the node_modules directory of your project and require it in your source.

var elsewhere = require('elsewhere');

The example code below builds a graph of http://premasagar.com and uses the promises interface to render the result.

elsewhere.graph('http://premasagar.com').then(function (graph) {
  // render the result, e.g. by logging its JSON representation
  console.log(graph.toJSON());
});

The graph method accepts a variety of options. Two of these (strict and stripDeeperLinks) govern only what toJSON returns and do not affect the graph itself.

  • strict: If this is set to true then toJSON will not return urls which are unverified. An unverified url is any url which does not link to any other verified url. The url provided to the graph method is inherently verified.
  • stripDeeperLinks: If set to true then urls at deeper path depths than that of the shallowest url on the same domain will be discarded.
  • crawlLimit: The number of urls that can be crawled in a row without any successful verifications before the crawling of any subsequent urls is abandoned.
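As a sketch (untested here), and assuming graph accepts an options object as its second argument, a call might look like the following. The option names come from the list above; the values, including the crawlLimit of 3, are purely illustrative:

    // Sketch: passing options through to elsewhere.graph.
    function buildGraph(elsewhere, url) {
      var options = {
        strict: true,           // toJSON omits unverified urls
        stripDeeperLinks: true, // keep only the shallowest url per domain
        crawlLimit: 3           // give up after 3 consecutive unverified crawls
      };
      return elsewhere.graph(url, options);
    }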

The default options as well as some more low level options can be found in lib/options.js.