Deploying and Scaling NodeJS apps on AWS

This is an experiment in automating deployment and scaling of NodeJS applications. It lets you declare the structure of your deployment right next to your app and instantly push it into the cloud, as anything from a single server through to a highly complex cluster.

It uses Amazon Web Services for hosting, Amazon CloudFormation to describe a deployment, and awsbox to handle all the fiddly NodeJS deployment bits.


You might like to think of it as "awsbox on steroids". Legal, harmful-side-effect-free steroids.

The awsboxen process in a nutshell:

  1. Store your code in git. We assume you're working from a git checkout.
  2. Create a ".awsboxen.json" file at the top level of your project.
  3. Populate it with awsbox and CloudFormation configuration data.
  4. Run "awsboxen deploy".
  5. Relax as your app is effortlessly deployed to the cloud.

The ".awsboxen.json" document describes the entire structure of the deployment. It includes awsbox config to specify the code and processes that should be run, and CloudFormation config to specify the physical resources on which to run them.

All deployment management is done through the "awsboxen" command-line client. Here are the major modes of operation:

awsboxen deploy [--profile=PROFILE]

This command lets you deploy a new version of your code into the cloud. You specify an optional deployment profile and a unique name for this particular deployment.

This command will:

  • Parse and load the .awsboxen.json file from the current directory.
  • Find all the declared boxen, and use awsbox to create an AMI for each with the appropriate version of the code.
  • Serialize the CloudFormation description and pass it up to AWS to create or update the deployment.
  • Wait until the deployment has completed, and report success or failure.

The same command works both for creating a new deployment and for updating an existing deployment to a new code version. Amazon CloudFormation has strong support for making safe updates to an existing deployment, as described in the AWS CloudFormation documentation.

This approach allows you to version-control your evolving deployment stack right alongside the actual code. New version adds another type of server, opens new network ports, and increases the size of the database? No problem, CloudFormation will take care of it with as little downtime as possible. Want a staged rollout of new instances to your auto-scaling group? No problem, CloudFormation can do that for you.

awsboxen freeze [--profile=PROFILE] [...]

Generate the frozen awsbox AMIs for all declared boxen, or for just the boxen named on the command-line. This may be useful if you want to use awsboxen for development, then plug the AMIs into some other system for final production deployment.

awsboxen showconfig [--profile=PROFILE]

This command will print the CloudFormation configuration exactly as it would be sent up to AWS, along with the processed list of Boxen definitions. It's very useful for debugging your configuration.

awsboxen list

This command will list the names of all current deployment stacks.

awsboxen info

This command gets information about a current deployment stack, including:

  • status of the stack
  • any "outputs" declared in the CloudFormation config
  • eventually this will report the deployed version of the code

awsboxen teardown

This command destroys a deployment stack, deallocating all the corresponding AWS resources. It's highly destructive and cannot be undone, so due care should be taken!

The structure of your AWS deployment is described using the AWS CloudFormation language, with some shortcuts and helpers to make things a little more convenient.

Conceptually, you provide a file ".awsboxen.json" with a full description of the desired deployment structure - all machine images, load balancers, databases, everything. But that can be pretty complicated, so let's work up to it slowly. Here's the simplest possible ".awsboxen.json" file::

  "processes": [ "server.js "]

Yes, this is just an awsbox deployment file! At deploy time awsboxen will fill in some sensible defaults, assuming that you want a single all-in-one server instance like you'd get from vanilla awsbox. It will expand the description into something like the following::

  {
    // Description automatically generated from repo name.
    "Description": "awsboxen deployment of example-server",

    // Enumerates the different types of boxen in this deployment.
    // Each entry is an awsbox configuration, which will be frozen into
    // an AMI and can be referenced in the "Resources" section.
    // In this case, we have only a single type of box.
    "Boxen": {
      "DefaultBox": { "processes": [ "server.js" ] }
    },

    // Enumerates the physical resources that make up the deployment.
    // This might include a load balancer, a database instance, and some
    // EC2 instances running boxen that were defined above.
    // In this case we have a single server instance.
    "Resources": {
      "DefaultBoxServer": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
          "InstanceType": "m1.small",
          "ImageId": { "Ref": "Boxen::DefaultBox" }
        }
      }
    }
  }


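The default-filling step described above can be sketched in plain NodeJS. This is only an illustration of the behaviour, not awsboxen's internals; the function name `expandDefaults` is an assumption:

```javascript
// Sketch (not actual awsboxen internals) of how a bare awsbox config
// might be expanded into a full deployment description.
function expandDefaults(config, repoName) {
  // A config that already declares Boxen or Resources is left alone.
  if (config.Boxen || config.Resources) return config;
  return {
    Description: "awsboxen deployment of " + repoName,
    Boxen: { DefaultBox: config },
    Resources: {
      DefaultBoxServer: {
        Type: "AWS::EC2::Instance",
        Properties: {
          InstanceType: "m1.small",
          ImageId: { Ref: "Boxen::DefaultBox" }
        }
      }
    }
  };
}

const full = expandDefaults({ processes: ["server.js"] }, "example-server");
console.log(full.Description);  // prints "awsboxen deployment of example-server"
```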
As your needs grow, you can fill in more and more of the deployment description manually rather than relying on the defaults.

You can also create multiple deployment profiles (e.g. one for dev, one for production) by populating the key "Profiles" with additional CloudFormation configs. The selected profile will be merged into the main configuration at deploy time::


  "Boxen": { "WebHead": { "processes": [ "server.js "] } },

  //  By default we use a small instance, for development purposes.

  "Resources": {
    "WebHead": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "m1.small",
        "ImageId": { "Ref": "Boxen::DefaultBox" },

  //  But we use a large instance when running in production.

  "Profiles" {
    "Production": {
      "Resources": { "WebHead": { "Properties": {
        "InstanceType": "m1.large"

The special profile name "Default" will be used if present when no explicit profile has been specified on the command-line.
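Profile selection amounts to a deep merge of the profile's config over the base config. Here is a minimal sketch of that behaviour; the function name `mergeProfile` and the exact merge semantics are assumptions:

```javascript
// Sketch of profile merging: the selected profile's config is deep-merged
// over the base config, falling back to the "Default" profile if present.
function mergeProfile(base, profileName) {
  var profiles = base.Profiles || {};
  var overlay = profiles[profileName] || profiles["Default"] || {};
  function deepMerge(dst, src) {
    Object.keys(src).forEach(function (key) {
      if (typeof dst[key] === "object" && typeof src[key] === "object") {
        deepMerge(dst[key], src[key]);
      } else {
        dst[key] = src[key];
      }
    });
    return dst;
  }
  // Work on a copy so the base config is left untouched.
  var merged = deepMerge(JSON.parse(JSON.stringify(base)), overlay);
  delete merged.Profiles;
  return merged;
}

var config = {
  Resources: { WebHead: { Type: "AWS::EC2::Instance",
    Properties: { InstanceType: "m1.small" } } },
  Profiles: { Production: { Resources: { WebHead: {
    Properties: { InstanceType: "m1.large" } } } } }
};
console.log(mergeProfile(config, "Production")
  .Resources.WebHead.Properties.InstanceType);  // prints "m1.large"
```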

The CloudFormation language can be pretty cumbersome, so we offer some handy shortcuts. You can use YAML instead of JSON, and if you specify a directory instead of a file then it will produce a dict with keys corresponding to child file names. The above example could be produced from a directory structure like this::

  .awsboxen/
      Boxen/
          WebHead.json
      Resources.json
      Profiles/
          Production.json

These are the things that don't work yet, in roughly the order I plan to attempt working on them:

  • Controllable logging/verbosity so that you can get feedback during the execution of various commands.
  • Add a "deploy --dry-run" command which prints a summary of the changes that will be made, and highlights any potential downtime or destruction of existing resources.
  • Try to read the event stream during creation/teardown, for better feedback on what's happening.
  • Make it easier to inject configuration via cloud-init. Currently you have to write a user-data script that sets the appropriate config files.
    • Idea: a "Plumbing" section in the config, where you can specify json files to write into the AMI. We translate it into cloud-init commands during pre-processing.
  • Handling of production secrets, e.g. SSL certs.
  • Cleaning up of old AMIs, and related snapshots.
  • If "awsboxen deplopy" is interrupted, rollback the in-progress deployment. A good idea, or a terrible one?