I recently decided to spend some time playing with Docker. As much as I appreciate the theory behind container-based deployments, I haven’t really sat down and explored it since the days when Sun first introduced its container virtualization technology.
The Docker concept is similar to Sun’s original offering: create relatively lightweight runtime images of an application for ease of deployment across hosts, especially in development and testing environments. Those environments have historically been resource-constrained even in the most successful businesses and organizations, so anything that provides logical separation of applications while making fuller use of the available compute makes good sense.
In addition to driving more efficient use of development compute resources, container technologies have matured over the last ten years to the point of being rather sophisticated and useful for production workloads. While Docker is really an application management framework built on containers, its ease of use and rich feature set have been very disruptive, driving a surge of interest in container virtualization over the last couple of years. Other container-based application frameworks to be aware of include Canonical’s LXC/LXD and CoreOS’s rkt.
The benefits of container-based computing are straightforward:
- more consistent application deployment – fewer artifact dingleberries hanging on from the last deployment or three
- portability of applications – changes are easily tracked, tested, and distributed in a flexible manner across many hosts with minimal manual handoffs between humans in different roles (e.g., dev vs. ops)
- fits well with CI/CD models in Agile environments – containers are often used to deploy microservices
- clusterable-by-design architecture – integrates with existing configuration management frameworks and HA designs in public cloud platforms
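To make that first benefit concrete, here is a minimal Dockerfile sketch for packaging a small (hypothetical) Python web app — the file name `app.py` and the base image tag are illustrative assumptions, not a prescription:

```dockerfile
# Hypothetical example: bake the app and its dependencies into one image,
# so every deployment starts from the same known-good artifact.
FROM python:3-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself.
COPY . .

# The same image runs identically on a laptop, a test VM, or a cluster node.
CMD ["python", "app.py"]
```

Building with `docker build -t myapp .` and running with `docker run myapp` then produces the same runtime environment on every host, which is exactly what eliminates leftover artifacts from earlier deployments.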
So, after reading Matthias and Kane’s “Docker: Up and Running” from O’Reilly Media, I was ready to take the plunge and create a Docker environment capable of supporting multiple hosts to deliver container images, and hopefully explore the cluster management capabilities of Docker Swarm.
In Part II, I’ll talk about how I configured my Docker R&D lab at home:
- Installation of Docker Toolbox on my OS X laptop
- Settling on a distro for Docker Engine (the Engine is Linux-based)
- Configuration of Docker services using VirtualBox and Vagrant
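As a preview of that last item, a single Docker host under Vagrant and VirtualBox can be sketched with a Vagrantfile like the one below. The box name, IP address, and memory size are illustrative assumptions for a home lab, not settings from my actual configuration:

```ruby
# Sketch of a Vagrantfile for one Docker Engine host on VirtualBox.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"          # any Docker-capable Linux distro
  config.vm.network "private_network", ip: "192.168.50.10"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024                         # modest footprint for a lab VM
  end

  # Vagrant's built-in Docker provisioner installs the Engine and can
  # pre-pull images so the host is ready to run containers at boot.
  config.vm.provision "docker" do |d|
    d.pull_images "alpine"
  end
end
```

Repeating the `config.vm.define` pattern for additional machines is one straightforward way to grow this into the multi-host lab described above; I’ll cover the details in Part II.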