gulp plugin to publish files to Amazon S3
Install `gulp-awspublish` as a development dependency:
```sh
npm install --save-dev gulp-awspublish
```
Then, add it to your `gulpfile.js`:
```js
var awspublish = require('gulp-awspublish');

gulp.task('publish', function() {

  // create a new publisher using S3 options
  var publisher = awspublish.create({
    params: {
      Bucket: '...'
    }
  });

  // define custom headers
  var headers = {
    'Cache-Control': 'max-age=315360000, no-transform, public'
    // ...
  };

  return gulp.src('./public/*.js')
    // gzip, Set Content-Encoding headers and add .gz extension
    .pipe(awspublish.gzip({ ext: '.gz' }))

    // publisher will add Content-Length, Content-Type and headers specified above
    // If not specified it will set x-amz-acl to public-read by default
    .pipe(publisher.publish(headers))

    // create a cache file to speed up consecutive uploads
    .pipe(publisher.cache())

    // print upload updates to console
    .pipe(awspublish.reporter());
});

// output
// [gulp] [create] file1.js.gz
// [gulp] [create] file2.js.gz
// [gulp] [update] file3.js.gz
// [gulp] [cache] file3.js.gz
// ...
```
Note: If you follow the aws-sdk suggestions for providing your credentials, you don't need to pass them in when creating the publisher.
Note: In order for `publish` to work on S3, your policy has to allow S3 actions such as `s3:ListBucket`, `s3:GetObject`, `s3:PutObject` and `s3:PutObjectAcl` (plus `s3:DeleteObject` if you use `sync`).
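As an illustrative sketch only (the bucket name is a placeholder, and the exact action list may vary with the features you use), an IAM policy along these lines covers the calls the plugin typically makes:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-bucket"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}
```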
add an `aws-credentials.json` file to the project directory with your bucket credentials, then run mocha.
Create a through stream that gzips files and adds the Content-Encoding header.
Create a Publisher.
Options are used to create an aws-sdk S3 client. At a minimum you must pass a `Bucket` param to define the site bucket. If you are using the aws-sdk suggestions for credentials you do not need to provide anything else.
Also supports credentials specified in the old knox format, a `profile` property for choosing a specific set of shared AWS creds, or an `accessKeyId` and `secretAccessKey` provided explicitly.
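For example, a configuration sketch with explicit keys (all values here are placeholders; in real projects, prefer the aws-sdk credential chain so keys stay out of your gulpfile):

```js
var awspublish = require('gulp-awspublish');

var publisher = awspublish.create({
  accessKeyId: 'AKID...',       // placeholder
  secretAccessKey: 'SECRET...', // placeholder
  region: 'us-east-1',          // assumption: your bucket's region
  params: {
    Bucket: 'my-bucket'         // placeholder bucket name
  }
});
```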
Create a through stream that pushes files to S3.

Files that go through the stream receive extra properties:

- `s3.path`: the file's path on S3
- `s3.state`: the file state (`create`, `update`, `delete`, `cache` or `skip`)
- `s3.headers`: the S3 headers for this file

`publish` will never delete files remotely. To clean up unused remote files, use `sync`.
Create a through stream that creates or updates a cache file using the file's S3 path and etag. Consecutive runs of `publish` will use this file to avoid re-uploading identical files.
The cache file is saved in the current working directory and is named `.awspublish-<bucket>`. It is flushed to disk every 10 files just to be safe.
Create a transform stream that deletes old files from the bucket. You can specify a prefix to sync a specific directory.
`sync` will delete files in your bucket that are not in your local folder.
```js
// this will publish and sync bucket files with the one in your public directory
gulp.src('./public/*')
  .pipe(publisher.publish())
  .pipe(publisher.sync())
  .pipe(awspublish.reporter());

// output
// [gulp] [create] file1.js
// [gulp] [update] file2.js
// [gulp] [delete] file3.js
// ...
```
The aws-sdk S3 client is exposed as `publisher.client` to let you perform other S3 operations.
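For instance (a hedged sketch: `copyObject` is a standard aws-sdk S3 call, and the bucket and keys below are placeholders):

```js
// publisher.client is the underlying aws-sdk S3 instance,
// so any S3 operation is available; e.g. copy an object
publisher.client.copyObject({
  CopySource: 'my-bucket/assets/app.js', // placeholder source
  Bucket: 'my-bucket',                   // placeholder bucket
  Key: 'backup/app.js'                   // placeholder destination key
}, function(err) {
  if (err) console.error(err);
});
```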
Create a reporter that logs `s3.path` and `s3.state` (delete, create, update, cache, skip).
```js
// this will publish, sync bucket files and print created, updated and deleted files
gulp.src('./public/*')
  .pipe(publisher.publish())
  .pipe(publisher.sync())
  .pipe(awspublish.reporter({
    states: ['create', 'update', 'delete']
  }));
```
You can use gulp-rename to rename your files on S3:
```js
// see examples/rename.js
gulp.src('examples/fixtures/*.js')
  .pipe(rename(function(path) {
    path.dirname += '/s3-examples';
    path.basename += '-s3';
  }))
  .pipe(publisher.publish())
  .pipe(awspublish.reporter());

// output
// [gulp] [create] s3-examples/bar-s3.js
// [gulp] [create] s3-examples/foo-s3.js
```
You can use concurrent-transform to upload files in parallel to your Amazon bucket:
```js
var parallelize = require('concurrent-transform');

gulp.src('examples/fixtures/*.js')
  .pipe(parallelize(publisher.publish(), 10))
  .pipe(awspublish.reporter());
```
You can use the merge-stream package to upload two streams in parallel, allowing `sync` to work with mixed file types:
```js
var merge = require('merge-stream');

var gzip = gulp.src('public/**/*.js').pipe(awspublish.gzip());
var plain = gulp.src(['public/**/*', '!public/**/*.js']);

merge(gzip, plain)
  .pipe(publisher.publish())
  .pipe(publisher.sync())
  .pipe(awspublish.reporter());
```
gulp-awspublish-router: a router for defining file-specific rules — https://www.npmjs.org/package/gulp-awspublish-router