The best thing about GitHub Actions is, of course, the light they shine on how badly we, as an industry, have let ourselves be led down the garden path: battling fundamental flaws in how we’ve been taught to do deployment (and, hence, development) over the last several years.

A (Brief) History

This year marks 30 years since the publication of what Wikipedia notes as the “earliest known work on continuous integration”, or CI. It took a decade or two for the practice to become more widespread than controversial, but the vast majority of shops that a 2019-era developer should consider submitting her CV to now practise some form of CI, and a sizeable portion of those also practise automated continuous delivery, if not fully continuous deployment (CD). DevOps, which has been a common practice and/or a marketing buzzword for a decade now, is properly a superset of continuous deployment.

We, as a craft industry, have a disastrous track record of integrating new practices into common workflows sufficiently well to see how evolving those workflows further might be impacted by the (current) capabilities and (current) limitations of tools widely used for those workflows. A prime example of this is many shops’ experience with CI/CD and Docker.

When Docker first became widely popular, circa 2014-2015, most shops that had previously invested in automated CI and CD “rolled their own” scripts and tools for Docker-based testing and deployment. A number of open-source and commercial projects were created or adapted to support Docker-based workflows, in which the CI workflow built an image (usually a set of images; more on that later) and ran automated unit and acceptance tests against those images; if those succeeded, the CD workflow deployed the image to a newly-provisioned server (usually a VPS instance), which could then be subject to further automated and/or manual testing in a staging environment before being shifted into production using the CI-built image.

What’s Different and What Matters

A key point differentiating Docker-based workflows from many alternative CI/CD workflows is that deploying the as-built-by-CI image(s) at the end of a successful chain of workflows is seen as a best practice. That serves as a guarantee that the software being deployed is bit-for-bit identical to the software that successfully passed build, unit, integration, and staging tests. This is what leads to multiple reports in the literature of organisations “deploying 50 times a day”, or to the 2011 presentation (slides, YouTube) discussing Amazon’s “velocity culture”; at the time, their mean time between production deployments was 11.6 seconds, with a maximum of 1,079 hosts receiving a deployment in a single hour. (The 2019 numbers are, no doubt, considerably higher.)

Your startup is not going to be the next Amazon (this quarter). My startup is not going to be the next Amazon (this year). But stories like theirs are salutary because they show what is possible, whereas the number and variety of “50 deployments per day” reports show what is practical. And yet, as several often-critical observers have noted, “90%+ of automated CI and CD tools in production are bespoke”, implying a massive, and largely redundant, effort by (in practice) well-resourced and well-led teams investing the staff-hours needed for reliable, repeatable testing and deployment.

Thirty years ago, one of the favourite grey-beard horror stories of development was the deputy junior assistant intern accidentally deploying his (usually partial) build to production, whereupon (expensive, career-limiting) hilarity ensued. One point repeatedly made by many of those “50 deploys” write-ups is that anybody trusted to write code is authorised to kick off a CI/CD workflow to test and deploy that code. Possibly to staging, but often to revenue production. This can only make business sense if the process (particularly testing) is proven so reliable that any problem not caught by tests is seen primarily as a defect in the testing/deployment process, for which the entire team is responsible, and not simply the fault of the most junior developer wondering if he’ll keep his job long enough to collect a paycheque.

So What’s Wrong?

What’s wrong is that, under official Docker best practices, application deployments are usually made up of multiple containers, each addressing a single concern (the application itself, the database/persistence layer, caching, static asset serving, HTTPS termination and reverse proxying, and so on). The long-time “standard” way to orchestrate such collections of containers is through Infrastructure as Code tools such as docker-compose or, alternatively, Red Hat Ansible or HashiCorp Terraform. Each of these tools has been used successfully in numerous “bespoke” CI/CD toolchains (their primary purpose). Getting them to work with off-the-shelf automation tools like GitLab CI/CD and GitHub Actions, to the point where they can be trusted for use by the aforementioned extremely junior developer, has proven to be “an interesting challenge”, to put it mildly.
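To make the “one concern per container” idea concrete, a minimal docker-compose.yml for a Ruby web app of the kind discussed below might look like the following sketch. Service names, image tags, and credentials here are illustrative assumptions, not taken from any particular project:

```yaml
version: "3.7"

services:
  app:                  # the application itself, built from the repository
    build: .
    depends_on: [db, cache]
    environment:
      DATABASE_URL: postgres://postgres:secret@db:5432/app_production
      REDIS_URL: redis://cache:6379/0

  db:                   # database/persistence layer
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data

  cache:                # caching layer
    image: redis:5

  web:                  # static assets, HTTPS termination, reverse proxy
    image: nginx:1.17
    ports:
      - "443:443"
    depends_on: [app]

volumes:
  pgdata:
```

Note that Compose wires these services together on a Docker network it creates itself, which is precisely the capability at issue in the first bullet below.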

  • GitHub Actions presently prohibits creation of Docker networks, which is a foundation-level requirement for Docker Compose. See this GitHub issue comment.
  • Ansible has been used in earlier test-application deployments, but API changes appear to have broken the module it uses to deploy to our preferred hosting provider, Digital Ocean; that fix has been left “for the community to implement”. To Ansible’s credit, they’ve recognised the potential value of restructuring the Ansible project. That document talks a good game, and indeed makes it sound like they’ve recognised some of their more glaring challenges, but we simply haven’t had the resources to evaluate whether the show-stopper-level problems encountered with Ansible 2.6 remain in the current 2.8 (with its accumulated other API changes). The greatly-expanded list of Ansible 2.8 modules supporting Digital Ocean, and the far more clearly-documented docker_compose module, are promising, however;
  • Terraform looks amazing, and the support team have been regularly praised in the project’s GitHub Issues. On the other hand, there are 1,466 open issues (and 11,367 closed) on that repository, and this writer has not yet even attempted to use the tool. That needs to change;
  • GitLab CI/CD has been shown to work quite well for applications developed from the ground up to evolve within the limitations and foibles of the tool. Adapting an existing Docker Compose-orchestrated application, which requires multiple containers to be active in all use cases, has been successfully tested to the point of bringing up the container constellation (docker-compose up -d), but no testing has yet been done of actual container/app tests, or of deployment beyond that. The available documentation and YouTube tutorials lead this writer to require comprehensive testing and configuration exploration before seriously considering it for professional* use.
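For reference, a “bring up the constellation” experiment of the kind described in the last bullet has the general shape sketched below: a .gitlab-ci.yml using the Docker-in-Docker service, which is GitLab’s documented route to running docker-compose inside its shared runners. The image tags and job name are assumptions, and the single job stops where the experiment stopped; this is not a complete pipeline:

```yaml
image: docker:19.03

services:
  - docker:19.03-dind        # Docker-in-Docker: the job gets its own dockerd

variables:
  DOCKER_HOST: tcp://docker:2375   # point the docker CLI at the dind service
  DOCKER_TLS_CERTDIR: ""           # disable dind TLS (default-on in 19.03)
  DOCKER_DRIVER: overlay2

stages:
  - build

bring-up:
  stage: build
  before_script:
    - apk add --no-cache docker-compose   # not included in the docker image
  script:
    - docker-compose up -d    # start the full container constellation
    - docker-compose ps       # confirm every service actually came up
```

Anything past this point (running the app’s test suite inside the constellation, let alone deploying it) remains, as noted, untested.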

“Professional” here alludes to the sense described by the debatably-anonymous quote; loosely,

An amateur practices until he gets it right. A professional practices until she can’t get it wrong.

In our context, “can’t get it wrong” largely implies “understands the tools and their use well enough to be able to accurately predict what changes can be made successfully, what changes are certain to fail, and where further experimentation might be valuable”. That is far beyond “getting it right” once or in a single configuration, and seems to this writer to be a minimal confidence level for code intended for revenue production.

What Works for You Folks?

If anybody has any experience reports (or, better yet, can point me towards good examples) of how to set up automated CI/CD for a Ruby+Postgres+Redis app using any of these tools, I’d really appreciate the help (and will be quite happy to amend this to gratefully acknowledge such assistance). I last built a CI toolchain about three years ago. The ecosystem has definitely changed, and not in a way that benefits the sole developer/small team.
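One avenue that may cover the simple Ruby+Postgres+Redis case without Compose at all is GitHub Actions’ own services: block, which attaches service containers to a runner-managed network rather than a user-created one, so the network-creation prohibition noted above does not apply. A hedged sketch, with image versions, Ruby version, and the test command as placeholder assumptions:

```yaml
name: CI
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:                  # runner-managed service container
        image: postgres:11
        env:
          POSTGRES_PASSWORD: secret
        ports: ["5432:5432"]
      redis:
        image: redis:5
        ports: ["6379:6379"]
    steps:
      - uses: actions/checkout@v1
      - uses: actions/setup-ruby@v1
        with:
          ruby-version: "2.6"
      - run: gem install bundler && bundle install
      - env:
          DATABASE_URL: postgres://postgres:secret@localhost:5432/test
          REDIS_URL: redis://localhost:6379/0
        run: bundle exec rake    # assumed default test task
```

This sidesteps, rather than solves, the orchestration problem: the services here are test fixtures, not the deployable constellation, so the bit-for-bit “deploy what you tested” guarantee discussed earlier is lost.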


Jeff Dickey

Software and Web developer. Tamer of deadlines. Enchanter of stakeholders.