A cover image URL cache exposing its content as a REST web service.

In various software (an ILS, for example), a cover image is displayed next to each book (or other kind of resource). These images are fetched automatically from providers, such as Google or Amazon, which offer web services to retrieve information on books from an ID (an ISBN, for example).
With Coce, the cover image URLs from various providers are cached in a Redis
server. Clients send REST requests to Coce, which replies with cached URLs or,
if they are not available in its cache, retrieves them from the providers. In its
request, the client specifies a provider order (for example
aws,gb,ol for AWS, Google, and
then Open Library): Coce sends the first available URL.
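The provider-order behaviour described above can be sketched in a few lines of JavaScript (the function name and the sample data are illustrative, not part of Coce's API):

```javascript
// Given the cover URLs cached for one ISBN, keyed by provider code,
// return the URL of the first provider in the client's requested order.
function firstAvailableUrl(urlsByProvider, order) {
  for (const provider of order) {
    if (urlsByProvider[provider]) return urlsByProvider[provider];
  }
  return null; // no provider has a cover for this ISBN
}

// Illustrative cache content: gb and ol have a URL, aws does not.
const cached = {
  gb: 'http://books.google.com/...',
  ol: 'http://covers.openlibrary.org/...'
};
console.log(firstAvailableUrl(cached, ['aws', 'gb', 'ol']));
// prints the gb URL, since aws has nothing cached
```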
Install and start a Redis server
Install the node.js libraries. In the Coce home directory, enter:

npm install
Configure Coce by editing config.json. Start with the provided sample file.

- port - port on which the server responds
- providers - array of available providers: gb, aws, ol
- timeout - timeout in milliseconds for the service. Above this value, Coce stops waiting for a response from providers
- redis - Redis server parameters
- gb - Google Books parameters:
  - timeout - lifetime of URLs cached from Google Books
- ol - Open Library parameters:
  - timeout - lifetime of URLs cached from Open Library. After this delay, a URL is automatically removed from the cache, and so has to be fetched again if requested
  - imageSize - size of images: small, medium, large
- aws - Amazon AWS parameters. In order to use AWS, you need to create a credential: create a user and grant it access to the Amazon Product Advertising API. Alternatively, you can get Amazon cover images with a simpler HTTP method which does not require a credential.
  - method - service|http. If using http, the timeout parameter is required, and no other parameter applies
  - host - the API is available for several locales; if omitted, the USA is used by default. For France: webservices.amazon.fr. See the list of locales.
  - imageSize - size of images: SmallImage, MediumImage, LargeImage
  - timeout - timeout when probing image URLs via direct HTTP requests
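Putting the parameters above together, a config.json might look like the following sketch. Every value here (port, Redis host, timeouts, image sizes) is an illustrative assumption to be adapted to your deployment:

```json
{
  "port": 8080,
  "providers": ["gb", "aws", "ol"],
  "timeout": 8000,
  "redis": { "host": "127.0.0.1", "port": 6379 },
  "gb": { "timeout": 86400 },
  "ol": { "timeout": 86400, "imageSize": "medium" },
  "aws": { "method": "http", "imageSize": "MediumImage", "timeout": 1000 }
}
```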
cd _Coce HOME_
node app.js
By default, when running Coce directly, there isn't any supervision mechanism, and Coce runs as a single-threaded process (like any node.js application). In production, it is necessary to turn Coce into a Linux service, with automatic start/stop and supervision. A traditional Unix process supervision architecture could be used: Unix System V init, runit, or daemontools.
A more sophisticated approach is to use Phusion Passenger. This way, it is possible to make Coce respond to requests on the HTTP port (80), even with other web apps running on the same server, and to run a Coce process on each core of a multi-core server.
For example, on Debian, follow these instructions. Then, from the Coce directory, start Coce:
passenger start --port 8080

or, to run it as a daemon:

passenger start --port 8080 --daemonize
Since Passenger manages the service restart automatically, the service startup can just be added to
/etc/rc.local on various Linux distributions.
To get all cover images from Open Library (ol), Google Books (gb), and Amazon (aws) for several ISBNs:
This request returns:
Without the &all parameter, the same request returns one URL per ISBN: the first available URL, respecting the requested provider order.
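As a sketch of how such request URLs can be assembled on the client side (coceUrl is a hypothetical helper, and the host is an assumption; the id and provider parameters match the benchmark example later in this document):

```javascript
// Build a Coce request URL from a list of ISBNs and an ordered provider list.
// Appending &all asks for every provider's URL instead of just the first one.
function coceUrl(host, isbns, providers, all) {
  var url = 'http://' + host + '/cover' +
    '?id=' + isbns.join(',') +
    '&provider=' + providers.join(',');
  return all ? url + '&all' : url;
}

console.log(coceUrl('localhost:8080', ['9780563533191', '2847342257'], ['ol', 'gb', 'aws'], true));
```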
On the client side, Coce can be queried with the coceclient.js module, which is used along these lines:

// isbns is an array of ISBNs
var coceClient = new CoceClient('http://coce.server:8080', 'ol,aws,gb');
coceClient.fetch(isbns, function(isbn, url) {
  // display the cover image at url next to the resource identified by isbn
});
Coce is highly scalable. With all requested URLs in cache, an
ab test of 10,000 requests, with 50 concurrent requests:
ab -n 10000 -c 50 "http://localhost:8080/cover?id=9780415480635,97808?1417492,2847342257,9780563533191&provider=gb,aws"
gives this result:
Document Path:          /cover?id=9780415480635,97808?1417492,2847342257,9780563533191
Document Length:        295 bytes

Concurrency Level:      50
Time taken for tests:   7.089 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      4610000 bytes
HTML transferred:       2950000 bytes
Requests per second:    1410.70 [#/sec] (mean)
Time per request:       35.443 [ms] (mean)
Time per request:       0.709 [ms] (mean, across all concurrent requests)
Transfer rate:          635.09 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.5      1       3
Processing:     9   34  16.5     32     288
Waiting:        7   29  16.4     27     278
Total:         12   35  16.5     34     290

Percentage of the requests served within a certain time (ms)
  50%     34
  66%     34
  75%     37
  80%     39
  90%     44
  95%     50
  98%     54
  99%     58
 100%    290 (longest request)