The 3 Musketeers: How Make, Docker and Compose enable us to release many times a day

[amaysim note: this is a re-publication of a blog post originally published in February 2018]

A couple of years ago, when a new Engineer joined amaysim it could take them a few days to set up their machine with locally running versions of our applications (before they could even begin to understand how the apps worked, let alone become productive). Then, depending on the application, deploying to production was often a black art, the mastery of which was known only to a few (unfortunate) chosen souls.

Fast-forward to today and the number of applications we build and support has more than quadrupled, but every Engineer can pull down almost any application, have it up and running and the tests passing in less than 10 minutes. And then they can deploy to production.

This is the story of how the 3 Musketeers enabled this change.

Build and Release Challenges

amaysim has both inherited and built software that’s written in a variety of different languages and is deployed and managed using a range of tooling and stacks. This has led to a vibrant and dynamic ecosystem and the flexibility (within some bounds) to use the right tools for the right job. On the flip side, as we’ve grown, this has sometimes come at a price:

  • Setting up local development environments became time consuming and complex when you had to deal with multiple projects, each with its own set of dependencies and requirements.

  • We think it’s important to give Engineers the choice to build using the OS that best suits their development style. But we found that Engineers developing on MacOS could sometimes end up with very different development environments to those on Windows, which were themselves different to production. This ran counter to the guidelines laid out in the 12 Factor App (particularly point 10, which focuses on Development and Production Parity) and led, when troubleshooting issues, to the classic adage of “hmm, but it works locally for me…”.

  • Our CI/CD servers and agents also need to support the different requirements of our application ecosystem. We found that there was an initial time and complexity cost in setting these up (for each application, runtime, set of dependencies, etc.), but also an ongoing maintenance cost.

  • Many of our critical build and deployment scripts were originally handcrafted and saved via the CI/CD server web interface. This made it hard to track changes to pipelines, sometimes led to copy-and-pasting of build scripts in an uncontrolled manner, and of course required a backup strategy to be in place to mitigate the risk of losing everything in the event that a CI machine died.

  • Finally, many of these core build, test and deployment scripts were written by a single person, with no real visibility for the rest of the team. That person sometimes wasn’t even in the team responsible for the code being deployed. In other words, Engineers had no clear picture of what actually happened between pushing their code and it arriving on a QA or Production environment. When something inevitably went wrong during the build and deployment process, tracking down and isolating the problem before attempting a fix became a time-consuming and arduous process for the team.

Time for Change

As our team and platforms grew, we saw that we needed to change how we built and deployed our applications. We realised that, amongst other things, we’d been missing consistency in our testing and releasing process and tooling. We were missing convention over configuration.

We needed to apply the same rigorous software engineering principles to our development and deployment tools as we did for all our other software.

We set ourselves some challenging goals for how we wanted to be able to build and release our software, which would guide us towards our ideal solution. These included:

  • Any Engineer should be able to check out any project and get it, or its tests, running in less than 10 minutes, using a standard and common interface.

  • An Engineer should be able to make a code change and release to production on their first day.

  • All build pipelines should follow a consistent pattern and the code that runs them should be source controlled, subject to peer review and be part of each application’s codebase.

  • All local development, QA, Pre-Prod and Production environments should be identical (or as similar to one-another as possible).

And fundamentally:

All applications should abide by the ‘principle of least surprise’ in their build and deployment process.

The 3 Musketeers

The challenges we originally faced at amaysim are not uncommon. I’ve seen them elsewhere and had previously developed a pattern based on Docker to mitigate them. At amaysim, there was the opportunity to apply it at a much larger scale, and to perfect and enhance it. The pattern now uses the combination of Docker, Docker Compose, and Make, which we’ve dubbed “The 3 Musketeers”.

What is it?

In a nutshell, the 3 Musketeers is a pattern for developing software in a repeatable and consistent manner. It leverages Make as an orchestration tool to test, build, run, and deploy applications using Docker and Docker Compose. The Make and Docker/Compose commands for each application are maintained as part of the application’s source code and are invoked in the same way whether run locally or on a CI/CD server.

The Tools

Docker is the most important musketeer of the three. I’ve loved Docker ever since I started learning it years ago. It changed the way I perceived software development: testing, building, running, and deploying can all be done inside a lightweight Docker container — one that can be run on different operating systems.

Docker Compose, or simply Compose, manages Docker containers in a very neat way. It allows multiple Docker commands to be written as a single one, which allows our Makefile to be a lot cleaner and easier to maintain. Testing also often involves container dependencies, such as a database, which is an area where Compose really shines. No need to create the database container and link it to your application code container manually — Compose takes care of this for you.

Make is a cross-platform build tool to test and build software, and it is used as the interface between the CI/CD server and the application code. A single Makefile per application defines and encapsulates all the steps for testing, building, and deploying that application. Of course other tools like rake or ant can be used to achieve the same goal, but having Make pre-installed in many OS distributions makes it a convenient choice.

An Example

To illustrate the 3 Musketeers in action, we are going to test and build a very simple AWS Lambda in Go, and deploy it to AWS using the Serverless Framework.

Note: Go and Serverless Framework are being used for this example but the 3 Musketeers is applicable to any language and deployment tool as long as Docker supports it.

In a nutshell, the application returns the value of the environment variable ECHO_MESSAGE on a GET /echo request. The Go sample application is below:

Compose

Given our application example is written in Go and the Serverless Framework is going to be used to deploy it to AWS, the docker-compose.yml file defines two services: golang and serverless. The golang service will be used for testing and building the application, and serverless for deploying.

To be able to test and compile a Go application, the source code must be inside the src folder of the Go environment. So the service golang sets volumes and working_dir to the default Go path of the flemay/golang image, mounting the project at /go/src/github.com/flemay/3musketeers/examples/lambda-go-serverless.

Applications at amaysim follow the twelve-factor app methodology and therefore configurations are stored within the environment. Deploying our application requires some information like the value of the message to be echoed and AWS configuration. These environment variables are stored in a .env file and passed to the service serverless.
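Putting the above together, a docker-compose.yml along these lines would do the job. (The serverless image name, its mount paths, and the exact options are assumptions for illustration; only flemay/golang and the Go source path come from the original post.)

```yaml
version: '3'
services:
  golang:
    image: flemay/golang
    env_file: .env
    volumes:
      - .:/go/src/github.com/flemay/3musketeers/examples/lambda-go-serverless
    working_dir: /go/src/github.com/flemay/3musketeers/examples/lambda-go-serverless
  serverless:
    image: flemay/serverless   # assumed image name
    env_file: .env
    volumes:
      - .:/opt/app
      - ~/.aws:/root/.aws      # lets AWS credentials be provided by file
    working_dir: /opt/app
```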

An example of a .env file for our application looks like the following:

ECHO_MESSAGE="Thank you for using the 3 Musketeers!"
ENV="dev"
AWS_REGION="ap-southeast-2"
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
AWS_PROFILE

The AWS credentials environment variables here are not set but will be passed to the container if they are set on the host. This is optional, as the service serverless also mounts the host’s ~/.aws directory so that AWS credentials can be provided by file instead.

Make

The Makefile contains targets that reflect the application’s life cycle like test, build, and deploy.

It is common for each of these stages to have dependencies. Test and build may require development packages, whereas deploy may need the application as a build artifact prior to deploying.

From a CI/CD perspective, each stage can be executed on a different agent, and therefore, it is important to check the presence of the dependencies before executing the tasks.

Dependencies

Our simple application has a few Go dependencies, defined in Gopkg.toml, that are needed for testing and building. The Makefile therefore has a target deps which follows a specific pattern: it calls the Compose service golang to execute the target _depsGo inside a Docker container.

_depsGo installs the required packages using the Go dependency management tool dep. It also produces an artifact, $(GOLANG_DEPS_ARTIFACT), a zip file of the dependencies to be passed along through the stages. This step is quite useful as it acts as a cache: subsequent CI/CD agents don’t need to re-install the packages when testing and building.

deps:
	docker-compose run --rm golang make _depsGo

_depsGo:
	dep ensure
	zip -rq $(GOLANG_DEPS_ARTIFACT) $(GOLANG_DEPS_DIR)/

Using targetName and _targetName is a naming convention to distinguish targets that can be called on any platform from those that need a specific environment or dependencies. Here deps can be run on Windows, Linux, or MacOS, since it only assumes that Compose is installed. On the other hand, _depsGo requires a Go environment and the tool dep, which may not be installed on the host.

Testing

With the dependencies installed, the application can be tested. The target test follows the same pattern as before: it executes the target _test inside a container, which calls the go test tool.

test needs some Go packages that are found in the vendor directory defined by $(GOLANG_DEPS_DIR). The Makefile has a target $(GOLANG_DEPS_DIR) which unzips the artifact $(GOLANG_DEPS_ARTIFACT) if the vendor directory does not exist. However, if the dependencies artifact is not present, test simply fails.
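A sketch of that unzipping target, relying on Make’s file-target semantics (the recipe only runs when the target path is absent), might look like the following; the unzip flags are assumed:

```make
# Recreates the vendor directory from the dependencies artifact.
# As a file target, it is skipped when $(GOLANG_DEPS_DIR) already exists.
$(GOLANG_DEPS_DIR):
	unzip -qo $(GOLANG_DEPS_ARTIFACT)
```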

test: $(GOLANG_DEPS_DIR)
	docker-compose run --rm golang make _test

_test:
	go test -v

Building

If all tests pass, the build process can be started with the target build. Again, the target depends on the Go dependencies and executes another target, _build, inside a container.

_build compiles the application and creates an artifact (zip file) to be deployed to AWS.

build: $(GOLANG_DEPS_DIR)
	docker-compose run --rm golang make _build

_build:
	GOOS=linux go build -o bin/main
	zip -r $(PACKAGE).zip bin

Deploying

The final stage of our application’s life cycle is to deploy it to AWS. The target deploy requires the build artifact defined by $(ARTIFACT_NAME), which was created by the target build. deploy calls _deploy, which uses the Serverless Framework to deploy to AWS.

deploy: $(ARTIFACT_NAME)
	docker-compose run --rm serverless make _deploy

_deploy:
	rm -fr .serverless
	sls deploy -v

Tying it all together

What we have now is a Makefile whose targets, based on the life cycle of our simple application, form a pipeline. Each stage relies on Docker and Compose to be executed, which means the pipeline can be tested and run on Windows, Linux, and MacOS. In other words, it runs the same way locally and on a CI/CD server.

The pipeline for our example application looks like the following:

[Image: Lambda Go Serverless Pipeline]

[Image: Stage deps of Lambda Go Serverless Pipeline]

Each stage in the example above uses the command make and has the environment variables and artifacts set if required. The Makefile encapsulates the complexity of how an application is being tested, built, and deployed. It is easier to remember from a developer’s perspective, and makes the CI/CD pipeline much cleaner.
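To illustrate how thin the CI configuration becomes, a GitLab-CI-style pipeline following this pattern might look like the sketch below. The stage layout and artifact file names are assumptions; the post does not name amaysim’s CI tool:

```yaml
stages: [deps, test, build, deploy]

deps:
  stage: deps
  script: make deps
  artifacts:
    paths: [vendor.zip]      # assumed artifact name

test:
  stage: test
  script: make test

build:
  stage: build
  script: make build
  artifacts:
    paths: [package.zip]     # assumed artifact name

deploy:
  stage: deploy
  script: make deploy
```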

Try it yourself

  1. Open up a terminal and generate an example: docker run -it --rm -v $PWD:/opt/app -w /opt/app flemay/cookiecutter https://gitlab.com/flemay/cookiecutter-musketeers-lambda-go-serverless

  2. Go to the example directory: cd yourgitreponame

  3. Simply test and build the lambda: make envfileExample deps test build pack

Wrapping Up

amaysim has completely embraced this standardised and consistent approach to testing, building and deploying our software. We now have well over a hundred pipelines that follow the pattern laid out by the 3 Musketeers.

The 3 Musketeers is technology agnostic, provided that Docker supports your chosen stack. This means it can be used to test, build and deploy micro-services, monoliths, and serverless applications alike.

The 3 Musketeers brings us consistency between projects, people, computers, and CI/CD pipelines. Consequently, all Engineers at amaysim can download any project and get it up and running with all tests passing in less than 10 minutes, and so can our CI/CD server.

It gives us freedom over the technologies we want to use, instead of being locked in to a vendor or waiting for vendors to upgrade. At the same time, it eases the management of CI/CD servers and agents, as they no longer need to care about the technology. No more agents stacked with programming languages, language versions, and package managers.

The 3 Musketeers also gives us confidence in our pipeline code. The pipeline is part of the project’s code repository and treated as such: it is developed and tested locally by Engineers prior to becoming part of the CI/CD server, and all changes to the pipeline are version controlled. An application’s pipelines now belong to the Engineers who maintain that application.

Moreover, the 3 Musketeers brings the development environment a step closer to production. Docker has a wealth of existing images that can be used for integration testing to replicate what is used in production: for instance, databases, Selenium, and even AWS services. And Compose makes it easy to orchestrate those containers together.

In all, the 3 Musketeers has been an overwhelmingly positive force in establishing consistency across our applications’ life cycles, enabling us to develop more quickly and focus on solving business problems rather than boilerplate.

Has any of the above piqued your interest? Does amaysim sound like the sort of place where you think you could make an impact? Do you thrive in organisations where you are empowered to bring change and constant improvement? Why not take a few minutes to learn more about our open roles and opportunities, and if you like what you see then say hi — we’d love to hear from you.

Shout-out to all the lawyers…

The views expressed on this blog post are mine alone and do not necessarily reflect the views of my employer, amaysim Australia Ltd.
