Overview
alamo is a wrapper around knox that provides a higher-level abstraction for s3, with handling of response status codes and automatic parsing of XML error bodies. It also provides a consistent streaming interface for both reading and writing, including multipart upload for large artifacts. Alamo implements automatic retries on error with exponential back-off.
Why alamo?
- knox is quite low-level with regards to response and error handling, which leads to duplicated code to parse response status codes and errors for each request, as pointed out by @domenic himself in https://github.com/Automattic/knox/issues/114
- the aws-sdk allows uploading streams (in an awkward way) but does not allow retrieving streams
- the aws-sdk allows multipart upload but is low-level and leaves a lot for the caller to implement
- knox-mpu allows multipart upload but buffers everything in memory, which is viable when uploading from your desktop without concurrency, but not from a server
- neither knox nor knox-mpu implements retries, which is problematic when uploading large artifacts to s3
- Fort Knox - Fort Alamo
Usage
npm install --save alamo
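A minimal end-to-end sketch, assuming placeholder credentials and bucket (see the API section below for the full option and method reference):

    var fs = require('fs');
    var alamo = require('alamo');

    var client = alamo.createClient({ key: '<api-key-here>', secret: '<secret-here>', bucket: 'my-bucket' });

    // stream a local file up to s3; retries with exponential back-off are automatic
    fs.createReadStream('/tmp/file.txt').pipe(client.createWriteStream('/file-on-s3.txt'));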
API
alamo.createClient(options)
Returns an s3 client. It accepts the same options as knox.createClient, plus the following options that control how retries work (see the retry module):

- `retries`: maximum number of retries (default 10)
- `factor`: exponential back-off factor (default 2)
- `minTimeout`: minimum time in milliseconds before the first retry (default 1000)
- `maxTimeout`: maximum time in milliseconds between two retries (default 60000)
    var alamo = require('alamo');

    var client = alamo.createClient({
      key: '<api-key-here>',
      secret: '<secret-here>',
      bucket: 'my-bucket'
    });
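For example, to override the retry behaviour described above (a sketch, the values shown are arbitrary):

    var client = alamo.createClient({
      key: '<api-key-here>',
      secret: '<secret-here>',
      bucket: 'my-bucket',
      retries: 5,
      factor: 3,
      minTimeout: 500,
      maxTimeout: 30000
    });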
Client.prototype.client
Access to the lower-level knox client.
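For calls that alamo does not wrap you can drop down to knox directly. A sketch, using knox's own copyFile:

    // raw knox call: no automatic retries or XML error parsing here
    client.client.copyFile('/file-on-s3.txt', '/copy-of-file.txt', function (err, res) {
      // handle the knox response yourself
    });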
Client.prototype.createReadStream(filename, headers)
Returns a readable stream.
    var fs = require('fs');

    client.createReadStream('/file-on-s3.txt').pipe(fs.createWriteStream('/tmp/file.txt'));
- `filename`: the s3 file name to retrieve
- `headers`: optional headers
- `options`: retry options
Alias: readStream
Client.prototype.createWriteStream(filename, headers, content)
Returns a writable upload stream. You can optionally pass a buffer to upload instead of piping to it.
    var fs = require('fs');

    var ws = client.createWriteStream('/file-on-s3.txt');
    fs.createReadStream('/tmp/file.txt').pipe(ws);
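To upload a buffer instead of piping, pass it as the content argument (a sketch, the headers and content are illustrative; content is forwarded to the underlying request.end as described below):

    client.createWriteStream('/file-on-s3.txt', { 'Content-Type': 'text/plain' }, new Buffer('hello world'));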
- `filename`: the s3 file name to upload to
- `headers`: optional headers
- `content`: optional content to upload. If content is passed, it is passed to the underlying `request.end`
- `options`: retry options
Alias: writeStream
Client.prototype.stream(method, filename, headers, content)
Generic stream implementation that accepts the http method as 1st argument and the s3 file name as 2nd argument.
    var fs = require('fs');

    var ws = client.stream('PUT', '/file-on-s3.txt');
    fs.createReadStream('/tmp/file.txt').pipe(ws);
- `method`: the http method e.g. `GET`, `PUT`
- `filename`: the s3 file name to upload to
- `headers`: optional headers
- `content`: optional content to upload. If content is passed, it is passed to the underlying `request.end`
- `options`: retry options
Client.prototype.get(filename, headers, cb)
Get an object and retrieve the response with the body
    client.get('/file-on-s3.txt', function (err, res) {
      // res is the full response, including the body
    });
- `filename`: the s3 file name to retrieve
- `headers`: optional headers
- `options`: retry options
- `cb`: callback that returns an error or `null` as 1st argument, and the response with the body if no error as 2nd argument
Client.prototype.del(filename, headers, cb)
Delete an object from s3
    client.del('/file-on-s3.txt', function (err, res) {
      // res is the response if the delete succeeded
    });
- `filename`: the s3 file name to delete
- `headers`: optional headers
- `options`: retry options
- `cb`: callback that returns an error or `null` as 1st argument, and the response if no error as 2nd argument
Client.prototype.put(filename, content, headers, cb)
Put an object
    client.put('/file-on-s3.txt', 'some content', { 'Content-Type': 'text/plain' }, function (err, res) {
      // res is the response if the upload succeeded
    });
- `filename`: the s3 file name to upload to
- `content`: content to upload
- `headers`: optional headers
- `options`: retry options
- `cb`: callback that returns an error or `null` as 1st argument, and the response if no error as 2nd argument
Client.prototype.post(filename, content, headers, cb)
Post an object
    client.post('/file-on-s3.txt', 'some content', { 'Content-Type': 'text/plain' }, function (err, res) {
      // res is the response if the post succeeded
    });
- `filename`: the s3 file name to post to
- `content`: content to post
- `headers`: optional headers
- `options`: retry options
- `cb`: callback that returns an error or `null` as 1st argument, and the response if no error as 2nd argument
Client.prototype.request(method, filename, content, headers, cb)
Generic non-streaming interface.
    client.request('PUT', '/file-on-s3.txt', 'some content', { 'Content-Type': 'text/plain' }, function (err, res) {
      // res is the full response, including the body
    });
- `method`: the http method e.g. `GET`, `PUT`, `DELETE`, `POST`
- `filename`: the s3 file name
- `content`: content to post
- `headers`: optional headers
- `cb`: callback that returns an error or `null` as 1st argument, and the response with the body if no error as 2nd argument
Client.prototype.signedUrl(filename, expiration, options)
Returns a signed url
    var url = client.signedUrl('/file-on-s3.txt', 60000);
    console.log(url);
- `filename`: the s3 file name to retrieve
- `expiration`: number of milliseconds that the signed url is valid for
- `options`: signed url options passed to knox; takes `verb`, `contentType`, and a `qs` object (see the example below)
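For instance, a signed url that lets a client upload directly to s3 (a sketch, the verb and content type are illustrative):

    var uploadUrl = client.signedUrl('/file-on-s3.txt', 60000, {
      verb: 'PUT',
      contentType: 'text/plain'
    });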
Client.prototype.multipart(filename, headers)
Returns a writable stream that uploads using the s3 multipart API. The stream is uploaded in 5mb chunks in parallel, capped at a maximum number of concurrent uploads, with automatic retries.
    var fs = require('fs');

    var ws = client.multipart('/large-file-on-s3.bin');
    fs.createReadStream('/tmp/large-file.bin').pipe(ws);
- `filename`: the s3 file name to upload to
- `headers`: optional headers
- `options`: retry options
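Because the upload happens in 5mb chunks rather than buffering the whole payload, the multipart stream can also be fed directly from an incoming request on a server. A minimal sketch, assuming the stream emits the standard writable `finish` and `error` events:

    var http = require('http');

    http.createServer(function (req, res) {
      var upload = client.multipart('/uploads/' + Date.now());
      upload.on('error', function (err) { res.statusCode = 500; res.end(err.message); });
      upload.on('finish', function () { res.end('uploaded'); });
      req.pipe(upload);
    }).listen(3000);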
Comparison with knox
Retrieving a stream with knox, with full error handling:
    var fs = require('fs');
    var XML = require('xml2js'); // any xml parser would do, the choice here is illustrative

    var req = client.get('/file-on-s3.txt');
    req.on('response', function (res) {
      if (res.statusCode !== 200) {
        // s3 returns errors as an XML body that has to be read and parsed manually
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
          XML.parseString(body, function (err, parsed) {
            console.error(err || parsed.Error);
          });
        });
      } else {
        res.pipe(fs.createWriteStream('/tmp/file.txt'));
      }
    });
    req.on('error', function (err) {
      console.error(err);
    });
    req.end();
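For contrast, the equivalent with alamo, where status codes and XML error bodies are handled for you (a sketch using createReadStream above, assuming failures surface as stream `error` events):

    var fs = require('fs');

    client.createReadStream('/file-on-s3.txt')
      .on('error', function (err) { console.error(err); })
      .pipe(fs.createWriteStream('/tmp/file.txt'));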
Roadmap
- Handle redirects and other 30x status codes: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
- Implement global max concurrent uploads
- Implement basic progress for multipart upload
- Accept string value for expiration that can be parsed by ms
- Add higher level functions for "file" upload / download
- Maybe use multipart upload automatically if content-length is unknown?
- Maybe allow automatic handling (parsing, marshalling) of json?
Contributions
Please open issues for bugs and suggestions on GitHub. Pull requests with tests are welcome.
Author
Jerome Touffe-Blin, @jtblin, About me
License
alamo is copyright 2015 Jerome Touffe-Blin and contributors. It is licensed under the BSD license. See the included LICENSE file for details.