# S3renity
WARNING: The s3renity package is being deprecated and moving to s3-lambda.
S3renity enables you to run batch functions on S3 objects with concurrency control. Set the context to a directory or key prefix, then run familiar functions such as `forEach`, `map`, `reduce`, or `filter` on all those objects. S3renity is promise-based, so you can chain operations together, as well as interact with the base API with promises instead of callbacks.

At Littlstar, we use S3renity for data cleaning, exploration, and pipelining.
## Install

```bash
npm install s3renity --save
```
## Quick Example

```js
const S3renity = require('s3renity');

// example options
const s3renity = new S3renity({
  access_key_id: 'aws-access-key',
  secret_access_key: 'aws-secret-key',
  show_progress: true,
  verbose: true,
  max_retries: 10,
  timeout: 1000
});

const bucket = 'my-bucket';
const prefix = 'path/to/files/';

s3renity
  .context(bucket, prefix)
  .forEach(object => {
    // do something with each object
  })
  .then(() => console.log('done!'))
  .catch(console.error);
```
## Batch Functions
Perform sync or async functions over each file in a directory.
- forEach
- each
- map
- reduce
- filter
### First Step: Setting Context

Before calling a batch function, you must tell s3renity which files to operate over. You do this by calling `context`, which returns a promise, so you can chain it with the batch request. The context function takes five arguments: bucket, prefix, marker, limit, and reverse.

```js
s3renity
  .context(bucket, prefix)
  // .forEach()... chain a batch function here

// you can also provide an array of contexts like this
const ctx1 = {
  bucket: 'my-bucket',
  prefix: 'path/to/files/1/'
  // marker: 'path/to/files/1/somefile'
};
const ctx2 = {
  bucket: 'my-bucket',
  prefix: 'path/to/files/2/'
  // marker: 'path/to/files/2/somefile'
};

s3renity
  .context([ctx1, ctx2])
  // .forEach()...
```
### forEach

```
forEach(fn[, isasync])
```

Iterates over each file in an S3 directory and performs `fn`. If `isasync` is true, `fn` should return a Promise.

```js
s3renity
  .context(bucket, prefix)
  .forEach(object => {
    // do something with each object
  })
  .then(() => console.log('done!'))
  .catch(console.error);
```
### each

```
each(fn[, isasync])
```

Performs `fn` on each S3 object in parallel. You can set the concurrency level (defaults to `Infinity`). If `isasync` is true, `fn` should return a Promise.

```js
s3renity
  .context(bucket, prefix)
  .concurrency(5) // operates on 5 objects at a time
  .each(object => {
    // do something with each object
  })
  .then(() => console.log('done!'))
  .catch(console.error);
```
### map

```
map(fn[, isasync])
```

Destructive. Maps `fn` over each file in an S3 directory, replacing each file with what is returned from the mapper function. If `isasync` is true, `fn` should return a Promise.

```js
const addSmiley = object => object + ':)';

s3renity
  .context(bucket, prefix)
  .map(addSmiley)
  .then(() => console.log('done!'))
  .catch(console.error);
```
You can make this non-destructive by specifying an `output` directory.

```js
const outputBucket = 'my-bucket';
const outputPrefix = 'path/to/output/';

s3renity
  .context(bucket, prefix)
  .output(outputBucket, outputPrefix)
  .map(addSmiley)
  .then(() => console.log('done!'))
  .catch(console.error);
```
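Since a mapper just takes one object body and returns the replacement body, it has the same shape as a callback to JavaScript's built-in `Array.prototype.map`. A local sketch of the same idea, with plain in-memory strings standing in for S3 object bodies (no S3 involved):

```javascript
// the same mapper shape as addSmiley above, applied to
// in-memory strings standing in for S3 object bodies
const addSmiley = object => object + ':)';

const bodies = ['hello', 'world'];
const mapped = bodies.map(addSmiley);

console.log(mapped); // ['hello:)', 'world:)']
```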
### reduce

```
reduce(func[, isasync])
```

Reduces the objects in the working context to a single value.

```js
// concatenates all the files
const reducer = (previousValue, currentValue, key) => {
  return previousValue + currentValue;
};

s3renity
  .context(bucket, prefix)
  .reduce(reducer)
  .then(result => console.log(result))
  .catch(console.error);
```
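The reducer has the same shape as a callback to JavaScript's built-in `Array.prototype.reduce`, with each `currentValue` being the body of one S3 object. A local sketch of the same concatenation, with plain strings standing in for object bodies (no S3 involved):

```javascript
// same reducer shape, applied to in-memory strings
const reducer = (previousValue, currentValue) => previousValue + currentValue;

const bodies = ['line one\n', 'line two\n', 'line three\n'];
const combined = bodies.reduce(reducer, '');

console.log(combined); // the three bodies concatenated in order
```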
### filter

```
filter(func[, isasync])
```

Destructive. Filters (deletes) files in S3. `func` should return `true` to keep the object and `false` to delete it. If `isasync` is true, `func` should return a Promise.

```js
// filters out empty files
const fn = object => object.length > 0;

s3renity
  .context(bucket, prefix)
  .filter(fn)
  .then(() => console.log('done!'))
  .catch(console.error);
```
Just like in `map`, you can make this non-destructive by specifying an `output` directory.

```js
s3renity
  .context(bucket, prefix)
  .output(outputBucket, outputPrefix)
  .filter(fn)
  .then(() => console.log('done!'))
  .catch(console.error);
```
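The predicate works like a callback to `Array.prototype.filter`: objects whose bodies make it return `true` survive. Locally, with plain strings standing in for object bodies (no S3 involved):

```javascript
// same predicate shape as fn above, applied to in-memory strings
const keepNonEmpty = object => object.length > 0;

const bodies = ['some data', '', 'more data', ''];
const kept = bodies.filter(keepNonEmpty);

console.log(kept); // ['some data', 'more data']
```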
## S3 Functions
Promise-based wrapper around common S3 methods.
- list
- keys
- get
- put
- copy
- delete
### list

```
list(bucket, prefix[, marker])
```

List all keys in `s3://bucket/prefix`. If you use a marker, s3renity will start listing alphabetically from there.

```js
s3renity
  .list(bucket, prefix)
  .then(list => console.log(list))
  .catch(console.error);
```
### keys

```
keys(bucket, prefix[, marker])
```

Returns an array of keys for the given `bucket` and `prefix`.

```js
s3renity
  .keys(bucket, prefix)
  .then(keys => console.log(keys))
  .catch(console.error);
```
### get

```
get(bucket, key[, encoding[, transformer]])
```

Gets an object in S3, calling `toString(encoding)` on objects.

```js
s3renity
  .get(bucket, key)
  .then(object => console.log(object))
  .catch(console.error);
```
Optionally, you can supply your own transformer function to use when retrieving objects.

```js
const zlib = require('zlib');

const transformer = object => {
  return zlib.gunzipSync(object).toString('utf8');
};

s3renity
  .get(bucket, key, null, transformer) // null encoding: let the transformer decode
  .then(object => console.log(object))
  .catch(console.error);
```
### put

```
put(bucket, key, object[, encoding])
```

Puts an object in S3. The default encoding is `utf8`.

```js
s3renity
  .put(bucket, key, 'hello world!')
  .then(() => console.log('done!'))
  .catch(console.error);
```
### copy

```
copy(bucket, key, targetBucket, targetKey)
```

Copies an object in S3 from `s3://bucket/key` to `s3://targetBucket/targetKey`.

```js
s3renity
  .copy(bucket, key, targetBucket, targetKey)
  .then(() => console.log('done!'))
  .catch(console.error);
```
### delete

```
delete(bucket, key)
```

Deletes an object in S3 (`s3://bucket/key`).

```js
s3renity
  .delete(bucket, key)
  .then(() => console.log('done!'))
  .catch(console.error);
```