Sam's blog

DevOps Engineer @ Ghost.


Architecting a development environment with Docker

I’ve been learning a lot about containerisation lately. My usual problem is jumping onto something because it’s interesting, then trying to fit every task I have into that solution. When it comes to containerisation, however, the fit seems to have happened very naturally.

At work I’ve been optimising the developer environments for a set of services that make up Ghost Pro. They’re not exactly microservices: each one is a substantial piece of software with a reasonable amount of complexity. None of the services had debug configurations, and each would only run if a specific set of instructions was followed. As these programs all run in the cloud, some Platform-as-a-Service (PaaS) options were used too, which were replicated for development with local equivalents. On top of that, a bunch of config flags disabled all of the interoperability of the software, and even though each system has strong test coverage, unit testing failed to cover all the edge cases of each service, meaning that most real testing happened in staging.

Coming from the outside, the ideal situation seemed to be that for any one service, you would spin up all of the services it communicates with, so that they could all run at once. I set to work and soon found that each repository had its own Dockerfile to build an image, and a docker-compose.yml file to pull the dependencies and use the working copy of the current project. This worked to a certain extent, but there were multiple commands that needed to be run in the context of the current project, so a Makefile was introduced to simplify those operations. Ultimately this led to a Makefile starting docker-compose, which invoked yarn to start a gulp build. The complexity was maddening.
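To give a feel for that layering, each repo’s entry point ended up looking something like the sketch below. The target and service names are hypothetical, not the actual Ghost Pro Makefiles, but the shape is the same: make calls docker-compose, which calls yarn, which calls gulp.

```makefile
# Hypothetical sketch of the old per-repo workflow:
# make -> docker-compose -> yarn -> gulp, four layers deep for a single service.
.PHONY: dev
dev:
	docker-compose up -d deps              # start the dependency containers
	docker-compose run --rm app yarn dev   # "yarn dev" in turn kicks off a gulp build/watch
```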

After completely rethinking the design, I came up with a better idea. Instead of polluting each repo with the configuration of all of its peers, a single repo could be used to orchestrate the setup and development of all of the services. The plan was to use some kind of script to pull all of the projects, and a configuration file stating whether each project should be used directly from a registry, or whether the app code in the container should be replaced with the working copy (this option could also enable debug ports, and hot-reloading on some projects). This config would be used to generate a docker-compose file, which would be immediately consumed by the script, followed by the execution of any setup scripts. As all of the software was written in Node.js, it was also used for the “build system”: yarn was used to execute the various entry points, and there was no need for Makefiles at all.
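As a rough sketch of that configuration file (the service names, field names, and file name here are illustrative, not the real config), each service simply gets an entry saying where its code should come from and which developer conveniences to switch on:

```js
// services.js (hypothetical) - one entry per service in the stack
module.exports = {
  api: {
    source: 'registry'        // run the published image as-is
  },
  billing: {
    source: 'working-copy',   // overlay the local checkout onto the container's app code
    debugPort: 9229,          // expose the Node.js inspector port
    hotReload: true           // restart the process when files change
  }
};
```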

This system not only cleared the clutter from the other repos and reduced the duplication of configuration, it also immediately improved the development workflow. Now the whole system could be searched for functionality, and any part of it could be executed locally with a single line of config changed. In terms of the implementation, a handlebars template was used to write the compose file, which was compiled with the config data whenever the project was started up.
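The start-up script that ties this together only needs a few lines. This is a minimal sketch, assuming a template named docker-compose.yml.hbs and the services.js config from above; the real tooling differs in the details:

```js
// generate-compose.js (illustrative sketch, not the actual Ghost Pro tooling)
const fs = require('fs');
const { execSync } = require('child_process');
const Handlebars = require('handlebars');

const services = require('./services'); // the per-service config shown earlier

// Compile the compose template with the config data...
const template = Handlebars.compile(fs.readFileSync('docker-compose.yml.hbs', 'utf8'));
fs.writeFileSync('docker-compose.yml', template({ services }));

// ...then bring the whole stack up using the freshly generated file.
execSync('docker-compose up -d', { stdio: 'inherit' });
```

With a yarn script entry along the lines of "dev": "node generate-compose.js", bringing up the entire stack becomes a single yarn dev.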

Suddenly, new features such as debugging individual projects could become issues for the system as a whole, and a single VSCode directory for the whole project could manage editor features and shortcuts. This is still a work in progress, but the initial time to get a full system ready was around 2-3 weeks, including all of the learning process and many iterations.
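For example, once a service’s debug port is exposed by the generated compose file, attaching the debugger can live in one shared launch configuration. This is an illustrative sketch (the service name, port, and container path are assumptions), not the project’s actual setup:

```jsonc
// .vscode/launch.json - attach to a containerised service's inspector port
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to billing",
      "type": "node",
      "request": "attach",
      "address": "localhost",
      "port": 9229,
      "localRoot": "${workspaceFolder}/billing",
      "remoteRoot": "/app"
    }
  ]
}
```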

For me, this has taken my understanding of Docker for a wild ride. Docker started for me as just a tool to simplify development, but I am beginning to see how the production environment could be drastically simplified using similar techniques.

Written by Sam Lord.