Exercise Notes

Learning goals:
- Understand principles and processes for building and managing code, infrastructure and deployment
- Creation of a Docker container for an existing application
- Setting up continuous integration through GitHub Actions
- Publishing a Docker image

Tools used:
- VSCode
- Node (version 18)
- Mocha testing library (version 10.2.0)
- Sinon (version 15.0.3)
- Mocha-sinon (version 2.1.2)
- Chai (version 4.3.7)
- Docker Desktop (version 4.18.0)
- DockerHub
- GitHub Actions
Exercise Brief
In this exercise, you will containerise your TableTennisTable app using Docker. You’ll create separate Docker images to develop, test and deploy a production-ready version of the app. We’ll learn about writing custom Dockerfiles, multi-stage Docker builds, and configuration management. Then you’ll set up continuous integration for your app using GitHub Actions. You’ll set up a workflow which will build a Docker image and use it to run your tests. Finally, you’ll extend the pipeline further to push production build images to Docker Hub.
Setup
Step 1: Checkout your current code
Check out your code from the TableTennisTable exercise that you did in the Tests – Part 1 module; it’ll form the starting point for this exercise.
For this exercise we will include a cap on the size of a league row, configurable by an environment variable. The code for this has already been written but commented out; you just need to uncomment the relevant lines.
The `isFull` method returned in `league_row.js` should look like this:

isFull: function () { return players.length === maxSize /* getMaxSizeWithSizeCap(maxSize) */; },
Using the commented-out code, replace the definition that uses `maxSize` with this:
isFull: function () { return players.length === getMaxSizeWithSizeCap(maxSize); },
The method `getMaxSizeWithSizeCap` is defined at the top of the file and commented out – simply uncomment it.
The app now includes a cap on the size of a league row, configurable by an environment variable, and the repository is ready for the exercise.
Step 2: Install Docker
If you haven’t already, you’ll need to install Docker Desktop. If prompted to choose between using Linux or Windows containers during setup, make sure you choose Linux containers.
Create a production-ready container image
The primary goal of this exercise is to produce a Docker image that can be used to create containers that run the Table Tennis Table app in a production environment.
Create a new file called `Dockerfile` in the root of your code repository. We’ll include all the necessary Docker configuration in here. You can read more about Dockerfile syntax here.
Create an environment file
One way to specify the values for environment variables within a Docker container is with an environment, or `.env`, file. You can find details on the syntax of these files here. You should create a `.env` file to specify a value for `TABLE_TENNIS_LEAGUE_ROW_SIZE_CAP`.
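For example, the file might contain nothing more than a single assignment (the value 4 here is arbitrary, chosen just for illustration):

TABLE_TENNIS_LEAGUE_ROW_SIZE_CAP=4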
Create a minimal Dockerfile
The first step in creating a Docker image is choosing a base image. We’ll pick one from Docker Hub. A careful choice of base image can save you a lot of difficulty later, by providing many of your dependencies out-of-the-box.
It’s quite an easy decision in our case: we have built a Node.js app, and Node provides a base Docker image here – just make sure you use an image with the correct version of Node.
When complete, you should have a single line in your Dockerfile:
FROM <base_image_tag>
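For example, given the Node version used in this exercise, that line might be:

FROM node:18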
You can build and run your Docker image with the following commands, although it won’t do anything yet!
$ docker build --tag table-tennis-table .
$ docker run table-tennis-table
Basic application installation
Expand the Dockerfile to include steps to import your application and launch it. You’ll need to:
- Copy across your application code
- Install dependencies
- Define an entrypoint, and default launch command
Keep in mind a couple of Docker best practices:
- Perform the least-changing steps early, to take full advantage of Docker’s layer caching.
- Use `COPY` to move files into your image. Don’t copy unnecessary files.
- Use `RUN` to execute shell commands as part of the build process.
- `ENTRYPOINT` and/or `CMD` define how your container will launch.
For help with doing this, Node provide a comprehensive set of instructions on how to containerise a Node.JS app which you can follow. You can of course skip the part about creating a Node.js app (we already have Table Tennis Table), and need go no further than the Run the image section. You also do not need to worry about exposing a specific port for the app to run on, as Table Tennis Table is a console application.
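Putting those pieces together, a minimal sketch of the Dockerfile might look like the following. The entry point path `src/app.js` is an assumption for illustration – use whatever file actually launches your app:

# Base image with the Node version used in this exercise
FROM node:18
# Work inside a dedicated directory in the image
WORKDIR /usr/src/app
# Copy dependency manifests first so this layer is cached across code changes
COPY package*.json ./
RUN npm install
# Copy the application code last, as it changes most often
COPY . .
# Launch the app (assumed entry point – adjust to match your project)
ENTRYPOINT ["node", "src/app.js"]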
After updating your Dockerfile, rebuild your image and rerun it. You’ll need to use a couple of options with the `docker run` command:
- the `-i` option to allow the container to read from your terminal
- the `-t` option to allocate a virtual terminal session within the container
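Combined with the image tag used earlier, that looks like:

$ docker run -it table-tennis-table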
Once you’ve run the app, have a play around with it in the terminal to check it works. By default, Docker attaches your current terminal to the container. The container will stop when you disconnect. If you want to launch your container in the background, use `docker run -d` to detach from the container. You can still view container logs using the `docker logs` command if you know the container’s name or ID (if not, use `docker ps` to find the container first).
When you’re running a web app (rather than a console app as we are) you will want to access it on `localhost` for testing purposes. You can do this by using the `EXPOSE` keyword in your `Dockerfile` to specify that the container should listen on a specific port at runtime, and using the `-p` option with `docker run` to publish the container’s port to a port on the host.
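For instance, a hypothetical web app listening on port 3000 (Table Tennis Table doesn’t need this) would include `EXPOSE 3000` in its Dockerfile and be run with something like:

$ docker run -p 3000:3000 <image_tag>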
Environment variables
There is a potential security issue with our approach to environment variables so far. The `.env` file could contain application secrets (it doesn’t in our case, but often will in the real world), and it is included in the Docker image. This is bad practice. Anyone with access to the image (which you may make public) can discover the embedded content.
It’s good practice for containerised applications to be configured only via environment variables, as they are a standard, cross-platform solution to configuration management. Instead of copying in a configuration file (`.env`) at build-time, we pass Docker the relevant environment variables at runtime (e.g. with `--env-file`). This will keep your secrets safe, while also keeping your image re-usable – you could spin up multiple containers, each using different credentials. Other settings that are not sensitive can also be varied between environments in this way.
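For example, to supply the variables from your local `.env` file at runtime rather than baking them into the image:

$ docker run -it --env-file .env table-tennis-table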
Create a `.dockerignore` file, and use it to specify files and directories that should never be copied to Docker images. This can include things like secrets (`.env`) and other unwanted files/directories (e.g. `.git`, `.vscode`, `.venv` etc.). Anything that will never be required to run or test your application should be registered with `.dockerignore` to improve your build speed and reduce the size of the resulting images. You can even ignore the Dockerfile itself.
Even if you are being specific with your `COPY` commands, create the `.dockerignore` file anyway, because it’s important to ensure no one accidentally copies the `.env` file over in the future.
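As a sketch, a `.dockerignore` for this project might contain entries like these (exactly what belongs here depends on your repository):

.env
.git
.vscode
node_modules
Dockerfile

Ignoring `node_modules` is worthwhile for a Node project, since dependencies are installed inside the image rather than copied from your machine.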
Note that any environment variables loaded as part of `docker run` will overwrite any defined within the Dockerfile using the `ENV` instruction.
Try adding environment variables this way, and check that the app works.
Create a local development container
Containers are not only useful for production deployment. They can encapsulate the programming languages, libraries and runtimes needed to develop a project, and keep those dependencies separate from the rest of your system.
You have already created what’s known as a single-stage Docker image. It starts from a base image, adds some new layers and produces a new image that you can run. The resulting image can run the app in a production manner, but is not ideal for local development. Your local development image should have two key behaviours:
- Enable a debugger to provide detailed logging and feedback.
- Allow rapid changes to code files without having to rebuild the image each time.
To do this, you will convert your Dockerfile into a multi-stage Dockerfile. Multi-stage builds can be used to generate different variants of a container (e.g. a development container, a testing container and a production container) from the same Dockerfile. You can read more about the technique here.
Here is an outline for a multi-stage build:
FROM <base-image> as base
# Perform common operations, dependency installation etc...
FROM base as production
# Configure for production
FROM base as development
# Configure for local development
The configurations of a development and production container will have many similarities, hence they both extend from the same base image. However, there will be a few key differences in what we need from the different containers. For example, we might need a different command to run a development version of our application than to run a production version, or we might not need to include some dependencies in our production version that we do need in our development version. Note that the instructions on how to containerise a Node.js app state that your Dockerfile should contain the following:
RUN npm install
# If you are building your code for production
# RUN npm ci --omit=dev
This is just one example of the differences between a development and a production container that you could reflect in your multi-stage Dockerfile.
The goal is to be able to create either a development or production image from the same Dockerfile, using commands like:
$ docker build --target development --tag table-tennis-table:dev .
$ docker build --target production --tag table-tennis-table:prod .
Docker caches every layer it creates, making subsequent re-builds extremely fast. But that only works if the layers don’t change. For example, Docker should not need to re-install your project dependencies just because you applied a small bug fix to your application code.
Docker must rebuild a layer if:
- The command in the Dockerfile changes
- Files referenced by a `COPY` or `ADD` command are changed
- Any previous layer in the image is rebuilt
You should place largely unchanging steps towards the top of your Dockerfile (e.g. installing build tools), and apply the more frequently changing steps towards the end (e.g. copying application code to the container).
Write your own multi-stage Dockerfile, producing two different images (one for development, one for production) from the same file.
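As a sketch, one possible shape for the finished file is below. The specific commands and the entry point path `src/app.js` are assumptions for illustration, not the definitive layout:

FROM node:18 as base
WORKDIR /usr/src/app
# Common setup: dependency manifests are needed by every stage
COPY package*.json ./

FROM base as production
# Install only production dependencies
RUN npm ci --omit=dev
COPY . .
CMD ["node", "src/app.js"]

FROM base as development
# Install everything, including dev tooling
RUN npm install
COPY . .
# Run with the inspector enabled so a debugger can attach
CMD ["node", "--inspect", "src/app.js"]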
Run your tests in Docker
Running your tests in a CI pipeline involves a lot of dependencies. You’ll need the standard library, a dependency management tool, third-party packages, and more.
That’s a lot, and you shouldn’t rely on a CI/CD tool to provide a complex dependency chain like this. Instead, we’ll use Docker to build, test and deploy our application. GitHub Actions won’t even need to know it’s running JavaScript code! This has a few advantages:
- Our CI configuration will be much simpler
- It’s easier to move to a different CI/CD tool in future
- We have total control over the build and test environment via our Dockerfile
Add a third build stage, called `test`, that encapsulates a complete test environment, and use the image to run your unit, integration and end-to-end tests with `docker run`. In the end you’ll have an outline that looks like the one below:
FROM <base-image> as base
# Perform common operations, dependency installation etc...
FROM base as production
# Configure for production
FROM base as development
# Configure for local development
FROM base as test
# Configure for testing
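As a sketch, the new stage might be filled in like this (assuming your tests run via the `test` script in `package.json`, i.e. `npm test`):

FROM base as test
# Tests need the dev dependencies (Mocha, Sinon, Chai)
RUN npm install
COPY . .
CMD ["npm", "test"]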
Build and run this test Docker container, and check that all your tests pass.
Set up GitHub Actions for your repository
GitHub Actions is totally free for public repositories. For private repositories, you can either host your own runner or use some amount of free time on GitHub-hosted runners. This is broken down in detail in their documentation, but even if your repository is private, the free tier should be plenty for this exercise.
Switching on GitHub Actions is just a matter of including a valid workflow file. At the root of your project, you should already have a `.github` folder. Inside there, create a `workflows` folder. Inside that, create a file with any name you want, as long as it ends in `.yml` – for example, `my-ci-pipeline.yml`. This file will contain a workflow, and a project could contain multiple workflow files, but we just need a single one for the Table Tennis Table app.
Here is a very simple workflow file, followed by an explanation:
name: Continuous Integration
on: [push]

jobs:
  build:
    name: Build and test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: echo Hello World
The top level object specifies:
- `name` – a display name for the workflow
- `on` – when the workflow is triggered
- `jobs` – a collection of jobs. For now there is a single job called `build`. This could be whatever name you want.

That job specifies:
- `name` – a display name for the job
- `runs-on` – which GitHub-hosted runner the job should use
- `steps` – a list of steps that will execute in order. Each step either `uses` an action or `run`s a shell command. An action is a reusable chunk of functionality that has been defined elsewhere. Other details can optionally be configured for a step – see here.
All this example workflow file does is check out the code and then print `Hello World`.
Try using the example config above in your own yaml file. Commit and push the code, then check that the build ran successfully. To check the output of your build, go to your repository on the GitHub website and open up the Actions tab. The URL should be of the form `https://github.com/<your_username>/<your_repository>/actions`. Click on a workflow run for details. Within a run, you can view a job. Within a job, you can expand the logs of each step.
Build your code
Replace the `echo` command with the correct command to build your project’s test Docker image (targeting the `test` stage). Check that the image build is triggered, and completes successfully, whenever you push to your repo.
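For reference, the steps section might end up looking something like this (the image tag `table-tennis-table:test` is an arbitrary choice):

    steps:
      - uses: actions/checkout@v2
      - run: docker build --target test --tag table-tennis-table:test .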
Note that the GitHub runner already has the Docker CLI installed. If you are curious, look at the documentation for details about which GitHub-hosted runners are available and what software each one has installed. But all we need from our runner (`ubuntu-latest`) is Docker.
Run the tests
By now you should have your tests running successfully in Docker. You will now update the CI pipeline to run those tests.
Add one or more steps to your workflow file in order to run the unit and integration tests. You should run the tests via Docker (i.e. `docker run ...` commands); don’t try to install your project dependencies or execute tests directly on the GitHub runner itself. Check that the tests run successfully whenever your pipeline is triggered.
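Continuing the sketch from the previous section, the extra step might look like this (assuming the test image was tagged `table-tennis-table:test`):

      - run: docker run table-tennis-table:test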
Update the build trigger
The `on` section of the workflow defines when the job will run. Currently we are only building on `push`, which means the workflow runs whenever any branch is updated. Another option is `pull_request`, which runs for open pull requests, using a version of the codebase where the merge has already been completed. Try changing the settings for your job so that it runs on both push and pull request.
In general, building both branches and pull requests is useful, as it tells you both if something is wrong with the branch and if something would be wrong with the target branch once a pull request has been merged.
Try adding a `paths-ignore` setting (to both the `push` and `pull_request` triggers) to avoid running the build when no relevant files have changed. For example, if only the `README.md` is updated, there’s no need to run the workflow.
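One possible shape for the trigger section, treating Markdown files as the only irrelevant ones (an assumption – adjust the patterns to suit your repository):

on:
  push:
    paths-ignore:
      - '**.md'
  pull_request:
    paths-ignore:
      - '**.md'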
Build Artefacts
Now you will expand this CI pipeline to publish build artefacts: specifically, to build production images and push them to Docker Hub. A public Docker registry (such as Docker Hub) is a great place to share build artefacts for open-source projects, as it’s extremely easy for anyone running Docker to download, run and integrate with your app.
Add a second job
Keep in mind that your pipeline should only push production images from the main
branch. We always want to run tests but do not want to publish build artefacts from in-development feature branches. To achieve this, create a second job in your workflow that will build and push an image to Docker Hub. This second job can then be configured to run less often.
Adding a second job to your workflow yaml file means adding another item to the `jobs` object. It will result in a structure like this:
jobs:
  job-one:
    name: Job One
    ...
  job-two:
    name: Job Two
    ...
Give it a placeholder step such as `run: echo "Publishing!"`.
We want this second job to run only if the first job succeeded, which can be achieved by configuring `needs: job-one` (replacing `job-one` with the actual ID of your test job). We also want this second job to run only for pushes, and only on the `main` branch. This can be achieved with an `if` option that checks that the values of both `github.event_name` and `github.ref` are correct.
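Put together, a skeleton for the second job might look like this (the job ID `publish` and the test job ID `build` are illustrative):

  publish:
    name: Publish
    runs-on: ubuntu-latest
    # Only run if the test job succeeded
    needs: build
    # Only run for pushes to the main branch
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    steps:
      - run: echo "Publishing!"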
Check that your second job is triggered at the correct times once you’ve configured it.
Docker Login
Before you can push images to Docker Hub, the first step will be to log in. On your local machine you can simply run `docker login` and log in interactively, but you’ll need to handle this slightly differently on a CI server.
- Add your Docker Hub password (or access token) as a secret value in your GitHub repository. The username can be added as a secret alongside the password, or just hardcoded in the yaml file.
- Add a step to your job which either uses a suitable GitHub Action or runs the `docker login` command directly. Either way, it will reference the secret password.
  - You can find an action, along with its documentation, by searching the Marketplace.
  - If you are running the shell command, you need to run it non-interactively, using your environment variables to supply the username and password. See here.
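A sketch of the shell-command approach is below. The secret names `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN` are assumptions – use whatever names you chose when creating the secrets:

      # --password-stdin avoids passing the secret as a command-line argument
      - run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login --username "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin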
Build and Push
To recap, the basic commands for building and pushing your application to Docker Hub should look like:
$ docker build --target <my_build_phase> --tag <image_tag> .
$ docker push <image_tag>
where `<image_tag>` has the format `<user_name>/<image_name>:<tag>`.
Modify your second job to build and push your application to Docker Hub instead of just `echo`-ing.
Make sure you set appropriate image tags! The most recent production image needs to be tagged `latest`, which is the default tag if you don’t specify one. If you want to keep older images – often good practice – you’ll need to tag each build uniquely. Teams often tag images with the git commit hash so they are easily identifiable. You could do this with the default environment variable `$GITHUB_SHA`.
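For example, the publishing steps might look something like this sketch (`<user_name>` and the image name are placeholders – substitute your own):

      - uses: actions/checkout@v2
      # Tag the production image both as latest and with the commit hash
      - run: docker build --target production --tag <user_name>/table-tennis-table:latest --tag <user_name>/table-tennis-table:$GITHUB_SHA .
      # --all-tags pushes every tag of the repository in one command
      - run: docker push --all-tags <user_name>/table-tennis-table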