How Travis CI helps us deploy to AWS Lambda

“What is Lambda, and how did we solve deployments?”

Note: This article was published several years ago; the information may no longer be accurate or relevant.

Want to simplify your infrastructure and reduce your operations overhead? Consider using AWS Lambda, and let Travis help you with the deployment.

If you don’t know AWS Lambda, here’s how Amazon describes it:

AWS Lambda is a compute service that runs your code in response to events and automatically manages the underlying compute resources for you.

Put simply, you write the code and AWS runs it for you. The “runs it for you” part can be event-driven, scheduled, or triggered manually.
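To make that concrete, a function is just a handler that Lambda invokes with the triggering event. Here’s a minimal sketch in Python (the handler name and return value are only illustrative):

def handler(event, context):
    # `event` depends on the trigger: an S3 notification, a Kinesis batch,
    # a scheduled event, or whatever payload you pass in a manual invocation.
    # `context` carries runtime metadata such as the remaining execution time.
    print("Received event: %s" % event)
    return {"ok": True}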

If you’re on the move to a more decoupled service architecture like we are, this can be a big help.


Why should you care about AWS Lambda?

AWS Lambda forces you to decouple your infrastructure. If all you have is a stateless function that sets off to work given some input parameters, slim interfaces are not only a goal, they’re a requirement.

On top of that, Lambda can help your company save money. In our case, we pay about a third of what we paid when we were using EC2.

And lastly, you don’t have to sweat the details of provisioning servers or handling load.

If you want to gain experience, start by experimenting with a simple task. For example, we started with archiving chores to learn more about Lambda.

How to get your code into AWS Lambda

AWS Lambda is organized around functions. These functions are the service’s main entity. You’re given an interface to call functions and pass them parameters, and you can do pretty much everything you want in them, as long as you stick to the rules.

You might start writing code in the AWS Console at first. But if you want to get serious, you’ll soon develop the code on your local machine. When it’s time to deploy, you zip it along with any dependencies (in AWS speak, you create a deployment package) and upload that to your function.
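Building the package is nothing fancy: zip your code together with its dependencies so the handler sits at the root of the archive. A rough sketch in Python (paths and file names are made up):

# package.py: zip a build directory into a Lambda deployment package.
import os
import zipfile

def build_package(source_dir, zip_path):
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as archive:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                full_path = os.path.join(root, name)
                # Store paths relative to source_dir so the handler
                # ends up at the root of the archive.
                archive.write(full_path, os.path.relpath(full_path, source_dir))

build_package("build/", "function.zip")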

Of course, existing functions can be updated with new code.

You can also upload the deployment package to S3 and have AWS Lambda load it from there. This is crucial to our setup.

A walkthrough of our Lambda deployment

Even though Travis officially supports Lambda deployments, we don’t use that feature. Instead, we use Travis to build the deployment package and upload it to S3.

This decouples the build from the rollout, which has several benefits:

  1. We can roll back a change without another Travis build: in case we ever need to revert a change that is already in production, we already have the previous build around. We’re not dependent on Travis being up when we need to push a hotfix out ASAP. Storage is cheap, so we keep old builds around forever.
  2. We can support different environments, e.g. staging and production (read more on how we do that later on).
  3. We can test branches without merging to master (this allows deploy-verify-merge workflows like GitHub Flow).

Once the package is on S3, you just have to call Lambda’s UpdateFunctionCode API and point it at the S3 location. Because all deployment packages stay around in S3, you can deploy any version at any time.
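With boto3, for example, that call is a one-liner; the function name, bucket, and key below are placeholders for your own values:

# deploy.py: point an existing Lambda function at a package stored on S3.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_code(
    FunctionName="my-function-production",             # placeholder
    S3Bucket="my-deployment-bucket",                   # placeholder
    S3Key="org/repo/production/abc123/function.zip",   # placeholder
)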

We use a web application to coordinate deployments and roll out Lambda functions. You could also use a script for that or integrate it with Hubot – you name it!

Support different environments

Doing the deployment this way allows you to support different environments, like staging and production.

Let’s say you write a function to process records from a Kinesis stream, and you have different streams for staging and production. You’d set up a build matrix in your .travis.yml:

env:
  matrix:
    - >
      ENVIRONMENT=staging
      KINESIS_STREAM_NAME=stream-staging
    - >
      ENVIRONMENT=production
      KINESIS_STREAM_NAME=stream-production

Then you’d set up the S3 deployment to upload to different paths in the same bucket:

deploy:
  provider:          s3
  …
  upload-dir:        $TRAVIS_REPO_SLUG/$ENVIRONMENT/$TRAVIS_COMMIT
  …

Note that upload-dir contains both the environment and the commit hash, so every commit gets its own build per environment.

Next, configure your build so that it packages the environment variables from the build matrix into a configuration file, and read that file from your code.
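One way to do that (the file name and keys are only illustrative) is a small build step that dumps the matrix variables into a JSON file inside the package, which your function reads at startup:

# write_config.py: run during the Travis build, before the package is zipped.
import json
import os

config = {
    "environment": os.environ["ENVIRONMENT"],
    "kinesis_stream_name": os.environ["KINESIS_STREAM_NAME"],
}

with open("build/config.json", "w") as f:
    json.dump(config, f)

# In the Lambda function itself, read the baked-in configuration:
# with open("config.json") as f:
#     CONFIG = json.load(f)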

You end up with two (or more) deployment packages on S3 that are self-contained because they include the environment-specific configuration. You can easily deploy them to their respective Lambda functions.

In terms of setup, it’s not as smooth as Travis’ official Lambda deployment – but it gives you a ton of flexibility that you would otherwise miss.

Why not use the built-in versioning?

Lambda offers basic versioning itself:

We recommend you use versioning and aliases to deploy your Lambda functions when building applications with multiple dependencies and developers involved.

– AWS Lambda Function Versioning and Aliases

So why aren’t we using this instead?

Lambda versioning adds complexity elsewhere, and we found it hard to inspect what’s deployed right now: S3 exposes an MD5-based ETag for each object, while Lambda reports a SHA-256 hash of the uploaded code, so the two can’t be compared directly.

The path we’ve chosen offers more flexibility, and we’re now using the same rollout process for Lambda functions and single-page applications. That’s why it made sense for us to do it this way.

As usual, your mileage may vary.

Photo by Christian Wiediger on Unsplash

Want to join our Engineering team?
Apply today!