SPARQL router 0.4.0
The NodeJS/Express application that powers queery.link to serve canned SPARQL queries to the world.
SPARQL is the query language for retrieving data from RDF triple stores. I often had the issue that fellow developers or data fanatics asked for data that was in a triple store, but they didn't know SPARQL.
This server application solves the issue:
- You write the query and give it a name (e.g. biggest-asian-cities)
- You save it under /tables, /graphs or /update, depending on the query type (SELECT, CONSTRUCT, DESCRIBE, SPARQL Update)
- You give the URL to your fellow developer, picking the right format for their usage:
http://yourhost/api/tables/biggest-asian-cities.csv for manipulation in a spreadsheet
http://yourhost/api/tables/biggest-asian-cities.json as input for a Web app
http://yourhost/api/tables/biggest-asian-cities.xml if they are into XML
- They get fresh updated results from the store every time they hit the URL!
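The URL pattern is easy to build programmatically. A minimal sketch, assuming a deployment at http://yourhost and a canned query named biggest-asian-cities (both are example values):

```shell
# Sketch: build the URL for a canned query result in a given format
HOST="http://yourhost"           # example: your deployment
QUERY="biggest-asian-cities"     # the canned query name
FORMAT="csv"                     # csv, json or xml
URL="$HOST/api/tables/$QUERY.$FORMAT"
echo "$URL"
# curl -s "$URL"                 # fetching fresh results requires a live instance
```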
- Exposes SPARQL queries as simple URLs, with choice of result format
- A canned query is a simple file located in /public/api/tables, /public/api/graphs, /public/api/update or /public/api/ask, depending on the query type (SELECT, CONSTRUCT, DESCRIBE, SPARQL Update, ASK)
- Besides using FTP or SSH, you can create a new canned query via HTTP POST
- For greater query reuse, variable values in the query can be populated by passing URL parameters
- Supports content negotiation (via the Accept HTTP header)
- Possibility to GET or POST a SPARQL query on /api/sparql and get the results, without saving it
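Taken together, here is a hedged sketch of three ways to hit the API. The country parameter name and the Accept media type are illustrative assumptions, and the live calls are commented out since they need a running instance:

```shell
BASE="http://yourhost/api"       # example: your deployment's API root
# 1. Canned query with a URL parameter filling a query variable (parameter name hypothetical):
# curl -s "$BASE/tables/biggest-asian-cities.json?country=Japan"
# 2. Content negotiation instead of a file extension (media type illustrative):
# curl -s -H "Accept: application/json" "$BASE/tables/biggest-asian-cities"
# 3. Passthrough: POST a raw SPARQL query to /api/sparql without saving it:
# curl -s --data-urlencode "query=SELECT * WHERE { ?s ?p ?o } LIMIT 10" "$BASE/sparql"
echo "$BASE/sparql"
```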
A screenshot of the tests gives an overview of the features.
Configuration and detailed usage documentation
- NodeJS (4.x, 5.x or 6.x) and NPM must be installed. They are also available in most Linux package managers.
- An RDF triple store that supports SPARQL 1.1 and JSON-LD output.
git clone https://github.com/ColinMaudry/sparql-router.git --depth=1
npm install --production
SPARQL router is also available as an NPM package.
On the wiki.
Once it's configured, you must initialize the system queries and test queries:
I haven't found a proper way to mock a triple store for testing purposes. I consequently use a remote triple store. That means the tests only work if the machine has Internet access.
The configuration used for the tests is stored in a dedicated file in the config directory.
First, make sure you have all the dev dependencies installed (run npm install without the --production flag).
Tests rely on mocha and supertest for the API, and on nightwatch for the frontend.
To run the API tests:
Overview of the API tests.
To run the frontend tests:
# Make sure the dev dependencies are installed
# Start the server in development mode with the test configuration
NODE_ENV=test npm run dev
# Run the frontend tests
npm run test-ui
By default, the application uses the config/default.json configuration file. To use a custom configuration, create e.g. a config/myconfig.json configuration file and start the application with:
NODE_ENV=myconfig npm start
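As an illustration only, a custom configuration might override the default and system SPARQL endpoints. The field names below are hypothetical sketches, not the actual schema; see the wiki for the real configuration keys:

```json
{
  "endpoints": {
    "default": "http://example.org/sparql",
    "system": "http://example.org/system/sparql"
  }
}
```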
Start in debug mode:
DEBUG=functions,routes npm start
If you want the app to restart automatically after fatal errors, I suggest you use forever.
When forever is installed globally, run the start command from the application's root directory.
See this wiki page for detailed instructions: Using SPARQL router
The API documentation can be found here (development version). If you're running the app, it is also served locally.
Actions that require authentication
The actions that are not read-only on the canned queries or the data require basic authentication.
- HTTP PUT to create or update a query
- HTTP DELETE to delete a query
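For instance, a hedged sketch of an authenticated update and delete. The credentials, host and the .rq upload are assumptions for illustration, and the live calls are commented out since they need a running instance:

```shell
CREDENTIALS="user:password"       # assumption: your configured basic-auth credentials
URL="http://yourhost/api/tables/biggest-asian-cities"
# Create or update the canned query from a local .rq file (upload format assumed):
# curl -u "$CREDENTIALS" -X PUT --data-binary @biggest-asian-cities.rq "$URL"
# Delete the canned query:
# curl -u "$CREDENTIALS" -X DELETE "$URL"
echo "$URL"
```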
If SPARQL router doesn't match your requirements, you can have a look at these solutions:
- The Datatank (PHP5) "The DataTank is open source software, which you can use to transform any dataset into an HTTP API."
- BASIL (Java) "BASIL is designed as middleware system that mediates between SPARQL endpoints and applications. With BASIL you can build Web APIs on top of SPARQL endpoints."
- User interface using Vue.js 1 and Bootstrap 3 (see https://queery.link)
- @OpenTriply's YASQE as the editor (http://yasqe.yasgui.org/)
- Table query results
- Single page application (= very fast transitions)
- Possibility to delete a query
- Requesting .rq or application/sparql-query returns the query text instead of the query results
- An arbitrary endpoint can be passed with canned queries (upon creation or update) and with passthrough queries
- Metadata (name, author) can be passed with canned queries (new and updates) and with passthrough queries. Creation and modification dates are added automatically
- The system endpoint stores the canned queries metadata
- The default endpoint is the endpoint that is used if no endpoint is provided by the client
- Added support for ASK queries on /api/ask
- Started work on UI, using VueJS (just wireframes for now)
- Updated the API documentation accordingly
- Added extra info upon app startup (used config, endpoint, app URL, etc.)
- App authentication can be disabled in configuration
- README mentions the Datatank and BASIL alternatives
- Added an npm start command for convenience
- Improved installation instructions
- Added pictures to explain how this thing works
- Improved information about the demo
- Support for SPARQL Update queries (requires authentication)
- Possibility to populate query variable values via URL parameters! (#10)
- Queries created and updated via HTTP POST are tested before creation/update
- Possibility to set up user:password for the configured endpoint (Basic authentication)
- The URL of the query is returned when creating or updating a query
- Tested on Fuseki 2.x, Dydra, Stardog 4.0.5, OpenLink Virtuoso (LOD cache)
- More useful error messages
- Applied NodeJS security best practices (with helmet)
- Enabled canned queries
- Extension (.csv, .xml, etc.) defines the format returned by the endpoint
- Passthrough queries via /api/sparql
- Create new canned queries by HTTP POST, SSH or FTP
- Basic auth for POST and DELETE
- API doc written in Swagger
- Support for HTTPS endpoints
- CORS support
If you use it, I'd really appreciate a public statement, such as a tweet!