Naive Continuous Deployment on ECS with AWS CodeBuild, CodePipeline and Lambda

WHAT IS CI/CD?

Continuous Integration refers to the practice of running the integration tests against every change pushed to the code repository.

Continuous Delivery and Continuous Deployment refer to the practice of automatically releasing every change that passes through the production pipeline.

We’ll cover only Continuous Deployment in this post.

Continuous Deployment with CodePipeline

We’ll go through a naïve CD setup with CodeCommit, CodeBuild, CodePipeline, Lambda, ECR and ECS.

We already have ECS set up to always pull the latest tag of the ECR repository image. We want CodePipeline to build a Docker image from our sources, tag it as latest, push it to ECR, and then restart the ECS tasks so they pull the new image.
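For context, here is a minimal fragment of such a task definition, with the container image pinned to the latest tag (the account ID, region, names and memory below are placeholders):

```json
{
  "family": "my-prod-task",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/your-registry-name:latest",
      "memory": 512,
      "essential": true
    }
  ]
}
```

Because the image reference never changes, a deployment only needs to push a new image under the same tag and restart the tasks.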

CodeBuild

We’ll be using CodeBuild to build our Docker image.

First it logs in to ECR, then it builds our image, tags it and pushes it to ECR. This is done via a buildspec.yml file that CodeBuild expects at the root of your code repository.

buildspec.yml

version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG

Let’s create a new CodeBuild project:

  • Source: CodeCommit, or your code repository (GitHub, etc.)
  • How to build:
    • Environment image: Use an image managed by AWS CodeBuild
    • OS: Ubuntu
    • Runtime: Docker
    • Runtime version: aws/codebuild/docker:17.09.0
  • Cache: a relatively new feature that caches artifacts such as Docker layers for faster builds, so select an S3 bucket
  • Advanced settings:
    • Timeout: 1 hour
    • Environment variables, added as plaintext:
      • AWS_DEFAULT_REGION : ‘eu-west-1’
      • AWS_ACCOUNT_ID : your-account-number
      • IMAGE_REPO_NAME : your-registry-name
      • IMAGE_TAG : ‘latest’
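If you prefer to script this instead of clicking through the console, the same settings map onto the AWS SDK’s createProject parameters. A hedged sketch, where the project name, repository location, cache bucket and service role ARN are placeholders:

```javascript
// Sketch of the CodeBuild project above as createProject parameters.
// All names and ARNs are placeholders; adjust them before use.
var params = {
  name: 'my-docker-build',
  source: {
    type: 'CODECOMMIT',
    location: 'https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/my-repo'
  },
  artifacts: { type: 'NO_ARTIFACTS' },
  environment: {
    type: 'LINUX_CONTAINER',
    image: 'aws/codebuild/docker:17.09.0',
    computeType: 'BUILD_GENERAL1_SMALL',
    privilegedMode: true,  // needed so the build can talk to the Docker daemon
    environmentVariables: [
      { name: 'AWS_DEFAULT_REGION', value: 'eu-west-1' },
      { name: 'AWS_ACCOUNT_ID', value: 'your-account-number' },
      { name: 'IMAGE_REPO_NAME', value: 'your-registry-name' },
      { name: 'IMAGE_TAG', value: 'latest' }
    ]
  },
  cache: { type: 'S3', location: 'my-codebuild-cache-bucket' },
  serviceRole: 'arn:aws:iam::123456789012:role/codebuild-service-role',
  timeoutInMinutes: 60
};

// new AWS.CodeBuild().createProject(params, function (err, data) { ... });
```

The commented-out call is where you would pass the parameters to an AWS.CodeBuild client from the aws-sdk package.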

CodePipeline

Let’s use AWS CodePipeline to trigger builds when changes are made to the master branch.

We choose our source provider (S3/CodeCommit/GitHub/etc.) and our code repository, as well as the branch (master in our case).

Next, we choose CodeBuild as the build provider and select the project we set up in the previous step.

Next, we need to choose a Deploy provider, but since we’ll handle deployment via Lambda, leave No Deployment for now.

Even though there’s an option to use ECS deployments, it’s not exactly what we want: we already have our own ECS cluster set up and just need to update it with the new image; we don’t need to deploy a new CloudFormation template or a new cluster/service.

Lambda deploy

The next step in the pipeline is a Lambda function that triggers a deployment by stopping all the tasks once the build succeeds. Depending on your setup, you might want to stop them one by one to maintain maximum availability.
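For the one-by-one variant, here is a hedged sketch of a helper that chains the stopTask calls sequentially (the function name is ours; ecs is assumed to be an AWS.ECS client):

```javascript
// Stop tasks one at a time so the ECS service scheduler can replace each
// task before the next one goes down. Returns a promise resolving to the
// list of stopped task ARNs, in order.
function stopTasksSequentially(ecs, cluster, taskArns) {
  return taskArns.reduce(function (chain, taskArn) {
    return chain.then(function (stopped) {
      return ecs.stopTask({ task: taskArn, cluster: cluster, reason: 'Deploying' })
        .promise()
        .then(function () { return stopped.concat(taskArn); });
    });
  }, Promise.resolve([]));
}
```

Note that resolving stopTask only confirms the stop request was accepted; for true zero-downtime you would also wait for the replacement task to become healthy before stopping the next one.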

The deploy function looks something like this:

'use strict';

var AWS = require('aws-sdk');

exports.handler = function (event, context) {
  var ecs = new AWS.ECS({apiVersion: '2014-11-13'});
  var userParams = event["CodePipeline.job"].data.actionConfiguration.configuration.UserParameters;
  userParams = JSON.parse(userParams);
  var codepipeline = new AWS.CodePipeline();
  var jobId = event["CodePipeline.job"].id;

  var cluster = userParams.cluster;
  var serviceName = userParams.serviceName;

  var ecsParams = {
      cluster: cluster,
      serviceName: serviceName,
  };

  // Notify AWS CodePipeline of a successful job
  var putJobSuccess = function(message) {
      var params = {
          jobId: jobId
      };
      codepipeline.putJobSuccessResult(params, function(err, data) {
          if(err) {
              context.fail(err);      
          } else {
              context.succeed(message);      
          }
      });
  };

  // Notify AWS CodePipeline of a failed job
  var putJobFailure = function(message) {
      var params = {
          jobId: jobId,
          failureDetails: {
              message: JSON.stringify(message),
              type: 'JobFailed',
              externalExecutionId: context.invokeid
          }
      };
      codepipeline.putJobFailureResult(params, function(err, data) {
          context.fail(message);      
      });
  };

  try {
   ecs.listTasks(ecsParams).promise()
    .then(function (data) {

      var tasks = data.taskArns;
      var promises = [];

      tasks.forEach(function (taskArn) {
        promises.push(ecs.stopTask({
          task: taskArn,
          cluster: cluster,
          reason: "Deploying"
        }).promise());
      });

      return Promise.all(promises);
    })
    .then(function(data) {
      putJobSuccess("Success");
    }).catch(function(err) {
      putJobFailure(err);
    }); 
  } catch(e) {
    putJobFailure(e);
  }    
};

The user parameters that we set up in CodePipeline were: {"cluster": "my-prod-cluster", "serviceName": "my-prod-service"}

Lambda Slack notification

Next, we want to be alerted about the result of the deployment. Let’s start by adding a Slack alert via Lambda. You’ll need to create a Slack bot (incoming webhook), get its service URL and give it access to your desired channel.

Setting up the Lambda node code:

var services = '/services/SERVICE_ID_HERE';  // Update this with your Slack service path
var channel = "#deploy";  // And this with the Slack channel

var AWS = require('aws-sdk');
var https = require('https');
var util = require('util');

exports.handler = function(event, context) {

  var codepipeline = new AWS.CodePipeline();
  var jobId = event["CodePipeline.job"].id;

  var userParams = event["CodePipeline.job"].data.actionConfiguration.configuration.UserParameters;
  userParams = JSON.parse(userParams);

  // Notify AWS CodePipeline of a successful job
  var putJobSuccess = function(message) {
      var params = {
          jobId: jobId
      };
      codepipeline.putJobSuccessResult(params, function(err, data) {
          if(err) {
              context.fail(err);
          } else {
              context.succeed(message);
          }
      });
  };

  // Notify AWS CodePipeline of a failed job
  var putJobFailure = function(message) {
      var params = {
          jobId: jobId,
          failureDetails: {
              message: JSON.stringify(message),
              type: 'JobFailed',
              externalExecutionId: context.invokeid
          }
      };
      codepipeline.putJobFailureResult(params, function(err, data) {
          context.fail(message);
      });
  };

  var postData = {
      "channel": channel,
      "username": "Codebuild",
      "text": "*Build for " + userParams.service + " on " + userParams.env + " succeeded!*"
  };

  var options = {
      method: 'POST',
      hostname: 'hooks.slack.com',
      port: 443,
      path: services,  // Defined above
      headers: { 'Content-Type': 'application/json' }
  };

  var req = https.request(options, function(res) {
    res.setEncoding('utf8');
    putJobSuccess("Success");
  });

  req.on('error', function(e) {
    console.log('problem with request: ' + e.message);
    putJobFailure(e);
  });

  req.write(util.format("%j", postData));
  req.end();
};

The user parameters we’re sending to Lambda via CodePipeline are: {"env": "production", "service": "my-prod-service"}

And we’re done!

You can now use Release change in CodePipeline to trigger a new deployment and receive a Slack alert in your deployment channel.

Gabriel MANOLACHE

big data engineer, machine learning enthusiast

Paris, France https://gmanolache.com