The Case for Containers


You may have walked away from my last post with the impression that I don’t want to use containers. Today’s post addresses that:

Containers increase developer productivity & reduce or eliminate environment drift through the release promotion cycle

That’s not meant to sound like consultant doubletalk, but it does. Here’s what I mean:

Speed & Agility

When a dev team takes the upfront time to settle on a single toolchain & set of supporting processes, this can often (always?) be captured in a single container image.

A Dockerfile can be written that provisions the right stuff, a layer at a time, and runs any config steps necessary for IDEs, such as cloning repos. In most cases, this work can be done once, with the resulting image stored in your team’s private container registry.
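
To sketch what that might look like - every base image, package & URL below is a made-up placeholder, not a recommendation:

```dockerfile
# Hypothetical dev-environment image; all names and versions are placeholders
FROM ubuntu:22.04

# Layer 1: the toolchain the team agreed on
RUN apt-get update && apt-get install -y --no-install-recommends \
        git build-essential openjdk-17-jdk maven \
    && rm -rf /var/lib/apt/lists/*

# Layer 2: team-standard config (linter rules, editor settings, etc.)
# assumes a dev-config/ directory exists in the build context
COPY dev-config/ /etc/devteam/

# Layer 3: clone the repos the team works in
RUN git clone https://github.com/example-org/example-app.git /workspace/example-app

WORKDIR /workspace
CMD ["/bin/bash"]
```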

When a new developer joins the team, she can pull & run that image and immediately have a functional environment.
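
In practice that’s a couple of commands; the registry, image name & tag here are hypothetical:

```bash
# pull the team's dev-environment image from the private registry
docker pull registry.example.internal/devteam/dev-env:2024.1

# run it interactively, mounting the current directory as the workspace
docker run -it --rm -v "$PWD":/workspace registry.example.internal/devteam/dev-env:2024.1
```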

No more “…I think you need the 123 version of the Flubster library. Ask Horace if he remembers”.

Stuff like that is dumb and shouldn’t exist. Containerized dev environments make those issues go away. That’s a good thing and is the #1 use case for containers. The benefit is observable, measurable & contributes directly to productivity, not to mention it keeps the new team member happy.

Environmental Consistency

Keeping the runtime environments as similar as possible from dev to staging or QA and into production has been a challenge for years. That great new application or feature - or even a single bugfix - may behave differently between these environments when, surprise, the environments differ. (a round of applause to Captain Obvious)

Jokes aside, this is a really expensive problem that exists everywhere. By containerizing your entire application - web & app servers, standalone databases, config files, etc. - you ensure that the app itself, as a whole, will behave identically on any system running the same container runtime.
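
One common way to capture the app as a whole is a Compose file. The sketch below is purely illustrative - the service names, images & values are placeholders:

```yaml
# docker-compose.yml - illustrative only
services:
  web:
    image: registry.example.internal/myapp/web:1.4.2
    ports:
      - "8080:8080"
    environment:
      # injected values like these are typically what differs per environment
      DATABASE_URL: postgres://db:5432/myapp
      MONITORING_ENV: dev
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

The same images get promoted from env to env; only the injected values change.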

That’s clearly a big benefit, but there’s a caveat or four that you must consider:

  1. You have to run a container runtime in production

  2. Your environments must consist of generally homogeneous systems, such that the runtime, available memory & CPU, and disk space for volumes work from env to env

  3. If your application is even vaguely complex and has stuff like monitoring agents installed, you’ll have to do deploy-time config changes so the prod agent isn’t monitoring QA or whatever (it happens)

  4. If your application consists of multiple services, each in its own container, then you’ll have to build a bunch of stuff to manage the communication between them.

With regard to number 4 above, even if you use k8s, you will have to define how each of these services communicates with the rest of the services. There’s no magic here.
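
To make that concrete, here is roughly what “defining the communication” looks like in k8s terms - the service name & ports are hypothetical:

```yaml
# a k8s Service that makes the hypothetical "orders" pods reachable by other services
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders        # must match the labels on the orders pods
  ports:
    - port: 80         # what callers use: http://orders
      targetPort: 8080 # what the container actually listens on
```

Every caller still has to be told that the address is http://orders - via config, env vars or service discovery. k8s gives you the plumbing, not the wiring diagram.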

You’ll have to decide for yourself what the config-pain threshold is, but it should be low.

Do the math.

I mean that seriously - calculate how much time is spent managing this complexity and then compare it to whatever cost savings you derive from having a single container (or set of containers) promoted from the left (dev) to the right (production).
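
To illustrate with purely made-up numbers: if orchestration & config upkeep costs your team four hours a week (roughly 200 hours a year), and environment-drift bugs were previously costing you one day a month (roughly 100 hours a year), the containerized pipeline is a net loss. Flip those numbers and it’s an easy win. Your figures will differ - the point is to actually write them down.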

- jbminn