[Opinion] Did Docker become pretty irrelevant? (and I’m only just noticing)

28th August 2018, by Jonny

Somewhat click-baity post title … but bearing in mind my blog doesn’t get any traffic, I don’t suppose that really matters. This is also entirely my own opinion, based on nothing but my own current thoughts. That means that in about a year or so I can look back and cringe at what I’ve written …

Where Docker came from

Docker was first unleashed on the world back in March 2013 – yep, five and a half years ago – and Docker 1.0 followed in June 2014. It’s fair to say that Docker monopolised the conversation around container technology at that time. It’s also fair to say that containers weren’t exactly anything new: LXC, Solaris Zones and FreeBSD jails had been around for years. But then, nothing’s really new in technology … My view is that Docker did it (it being containerisation) really well. Docker, through Dockerfiles, made what could seem a very complicated concept remarkably straightforward.
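To show just how low Docker set the bar, here’s a minimal, purely illustrative Dockerfile (the base image, paths and port are made up for the example, not from any real project):

```dockerfile
# Minimal illustrative Dockerfile: serve a static site with nginx.
# Start from a small, pre-built base image
FROM nginx:alpine
# Copy the site's content into the image
COPY ./site /usr/share/nginx/html
# Document the port the container listens on
EXPOSE 80
```

Three instructions, and anyone on the team can build an identical image from them.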

It is my belief that containers are a good example of a disruptive technology. But, perhaps, not disruptive enough on their own?

With Docker it was trivial to create a container image, and with the Docker daemon running on a Linux system, downloading and running those images was just as easy. The Docker image registry was a single, central location where you could access literally thousands of ‘pre-packaged’ applications, with more being added daily.
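The day-to-day workflow fitted into a handful of commands. A quick sketch, with the image name and tag made up for illustration:

```sh
docker build -t example/site:1.0 .   # build an image from a Dockerfile in the current directory
docker pull nginx                    # fetch a pre-packaged image from the registry
docker run -d -p 8080:80 nginx       # run it detached, mapping host port 8080 to the container's 80
```

That was more or less the whole learning curve for getting an application into a container and onto a host.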

Docker Hub was free to set up an account on, and you could link your Docker account to your GitHub (or Bitbucket) account so that Docker would automatically build and host your container images, which you could then pull down to your hosts running the Docker daemon. Docker saw off potential competition from Linux Containers (LXC) and CoreOS’s rkt. Again, in my opinion, Docker basically became the de facto standard container runtime, and Docker Inc. (yep, by this stage they’d incorporated) started to build out a whole environment for running Docker containers at scale. To a certain extent, running containers on Linux meant you were running Docker, and there’s no doubt Docker Inc. wanted to be in that position.

All the major Linux distributions started shipping Docker packages, which meant you could end up running application packages built for Ubuntu on RHEL and vice versa. Coupled with the tooling Docker Inc. was developing, the future looked rosy for Docker. So much so that Docker Inc. took the decision to split the open source docker software off into a project called ‘Moby’, whilst retaining ownership of the docker-ce and docker-ee packages.

This looked, to me anyway, like a bit of a land-grab by Docker Inc. – they’d released the docker software, and being open source had helped it gain traction and market share. Now, it appeared as though Docker Inc. wanted to assert (more) control over it. Which could have worked, except there had been a development, or rather a release, that would be truly disruptive.

It happens when you’re not paying attention

Whilst Docker Inc. were busy trying to build up their ecosystem of tooling around the docker software, elsewhere someone was about to make a huge code drop. As mentioned earlier, containers aren’t new, and it turns out some companies had been running them at scale for years, learning a few lessons along the way. The main lesson: the individual container isn’t really very important. When you’re running containers at scale, and particularly at Internet-behemoth scale, what matters is that you have enough containers available to meet your current needs, and that you can scale up or down to meet your future ones.

Yeah – in July 2015, Google dropped Kubernetes on the world. Google had been running large numbers of containers on a daily basis for years, and destroying large numbers too, depending upon demand. Internally, they had an orchestration engine called ‘Borg’. Apparently, some Google engineers took the concepts behind Borg and applied them to Docker containers to produce Kubernetes. Vast over-simplification there …

The release of Kubernetes didn’t change everything immediately, but it certainly shifted the conversation from containers to orchestration. By handing Kubernetes over to the Cloud Native Computing Foundation, Google also demonstrated that they weren’t holding onto the code (like they do with Android, for example). It may well be that Google engineers still provide the largest share of commits to Kubernetes, so they undoubtedly retain a degree of influence. However, other companies also contribute, and it has truly become a community project.
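You can see that shift in what you actually ask a cluster for. With Kubernetes you rarely run ‘a container’; you declare how many replicas you want and let the cluster maintain them. Here’s a minimal, purely illustrative Deployment (the names and image are made up for the example):

```yaml
# Illustrative Deployment: declare three replicas of an nginx pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # Kubernetes keeps this many copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
```

Scaling is then a one-liner (`kubectl scale deployment web --replicas=10`), and no single container matters: if one dies, the cluster simply replaces it.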

Once again, in my opinion, Kubernetes will prove to be the disruptive technology, not docker. The docker software will simply become a component of Kubernetes, and it is already a component that can be swapped out easily: Kubernetes talks to runtimes through its Container Runtime Interface (CRI), so alternatives like containerd or CRI-O can slot in underneath.
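To make that concrete, here’s a hedged sketch of swapping the runtime out from under the kubelet. The flags below are roughly right for Kubernetes releases around the time of writing, but flag names and socket paths change between versions, so treat this as an illustration rather than a recipe:

```sh
# Illustrative only: point the kubelet at containerd via the CRI
# instead of the Docker engine. Verify the flags for your exact version.
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

Nothing above the runtime changes; the pods neither know nor care what is actually starting their containers.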

Why is Docker irrelevant?

Having kick-started the conversation around containers, Docker Inc. have been somewhat sidelined. The real conversation is orchestration, and as of right now, that means Kubernetes. Docker themselves have incorporated Kubernetes into their own software. Where Docker Inc. once looked like the new kid on the block with the world at their feet, now they’re simply another Kubernetes vendor, in a crowded marketplace where they don’t have very much to set them apart.

Kubernetes is now available as a service on all the major public clouds, and there are several vendors offering productised Kubernetes distributions (Red Hat’s OpenShift, Rancher, Pivotal’s PKS, Canonical’s Kubernetes). These might well use docker as the container runtime, but docker is just a commoditised component. The value lies elsewhere, and that is why I think Docker Inc. is irrelevant. They could end up being collateral damage from the Kubernetes release.