Docker Shenanigans: Part II

For my Docker adventures, I opted to install Docker Toolbox on my MacBook Pro running Yosemite. Even though Docker Toolbox includes VirtualBox, I already had a VirtualBox installation, so I used that to host my Docker Engine instance instead of the docker-machine workflow in Toolbox. I went this route because I wanted to test Docker services running on different host OSes, like CentOS, Ubuntu, and CoreOS, using the same client software on my MBP. So, I built an Ubuntu Server 16.04 VM and configured Vagrant to use that image.
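
For reference, a minimal Vagrantfile along these lines is all it takes; the box name and memory size below are illustrative placeholders rather than an exact record of my setup, while the IP matches the one used later in this post:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"                       # Ubuntu Server 16.04 LTS
  config.vm.network "public_network", ip: "192.168.0.50"  # bridge onto the home LAN
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024                                      # a modest VM is plenty for a lab
  end
end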

Using Ubuntu 16.04 LTS for Docker Engine, or Why Is Service Control So Wonky?

One pitfall I ran into when installing Docker on my Ubuntu VM was that the default install listens on a local Unix socket rather than TCP for daemon access. Since I wanted to be able to communicate with Docker from other nodes on my home network, I needed to change the default startup configuration. As it turns out, this is annoyingly less than straightforward because of the inconsistent state of systemd service configuration on my VM.

Systemd is the current framework for service control on Ubuntu and other distros like CentOS. However, Docker’s integration with it is relatively new and has some gotchas: the familiar DOCKER_OPTS parameter, long the recommended way of controlling how the Docker service advertises itself, is no longer honored under systemd, so an alternative configuration in the form of a drop-in unit override is needed. In my case, I needed to do the following:

mkdir /etc/systemd/system/docker.service.d
vi /etc/systemd/system/docker.service.d/docker-tcp.conf

In docker-tcp.conf, I used this syntax to configure TCP communications with my Docker VM:

[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon --host=tcp://192.168.0.50:2375 --host=fd://

where 192.168.0.50 is the interface exposed on my home network for the Docker Engine. The empty ExecStart= line is intentional: it clears the start command from the packaged unit file so the drop-in’s command replaces it rather than being rejected as a duplicate ExecStart. To activate this configuration, I had to tell systemd to reload its unit files and restart my Docker instance:

systemctl daemon-reload
systemctl restart docker
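
To confirm the override took effect, a quick check along these lines (just one way to do it) should show the drop-in listed in the service status and the daemon listening on the new port:

systemctl status docker
ss -lnt | grep 2375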

After this, I set the DOCKER_HOST environment variable on my client node

export DOCKER_HOST=192.168.0.50:2375

and was then able to connect to Docker Engine in the Ubuntu VM from my native OS X Docker client:

[rcrelia@fuji ~]$ export DOCKER_HOST=192.168.0.50:2375
[rcrelia@fuji ~]$ docker info
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 3
Server Version: 1.12.2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 24
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: overlay host null bridge
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-43-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 992.4 MiB
Name: ubuntu
ID: H3HJ:MMBL:4S3N:56X7:JW2P:AUC6:6XRT:UNV4:KS2Q:UNDM:JXJ3:5MSH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8
[rcrelia@fuji ~]$

At this point, I have a working Docker Engine VM that is independent of the client software installed on my laptop via Docker Toolbox.
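
As a final sanity check from the laptop (assuming DOCKER_HOST is still exported as above), running a throwaway container exercises the whole path from the OS X client to the remote Engine:

docker run --rm hello-world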

Docker Shenanigans: Part I

I recently decided to spend some time playing with Docker. As much as I appreciate the theory behind container-based deployments, I haven’t really sat down and explored the practice since the days when Sun first introduced its container virtualization technology.

The Docker concept is similar to Sun’s original offering: create relatively lightweight runtime images of an application for ease of deployment across hosts, especially in development and testing environments. Those environments have historically been starved for resources in even the most successful businesses and organizations, so anything that provides logical separation of applications while making better use of the available compute makes good sense.

In addition to driving more efficient use of development compute resources, container technologies have matured over the last ten years to the point of being rather sophisticated and useful for production workloads. While Docker is really an application management framework built on containers, its ease of use and rich feature set have been highly disruptive and responsible for a surge of interest in container virtualization over the last couple of years. Other container-based application frameworks to be aware of include Canonical’s LXC/LXD and CoreOS’s rkt.

The benefits of container-based computing are straightforward:

  • more consistent application deployment – fewer artifact dingleberries hanging on from the last deployment or three
  • portability of applications – changes are easily tracked, tested, and distributed in a flexible manner across many hosts with minimal manual handoffs between humans in different roles (e.g., dev vs. ops)
  • fits well with CI/CD models in Agile environments – containers are often used to deploy microservices
  • clusterable-by-design architecture – integrates with existing configuration management frameworks and HA designs in public cloud platforms

So, after reading Matthias & Kane’s “Docker: Up and Running” from O’Reilly Media, I was ready to take the plunge and create a Docker environment capable of supporting multiple hosts for delivering container images, and hopefully to explore cluster management via Docker Swarm.

In Part II, I’ll talk about how I configured my Docker R&D lab at home:

  • Installation of Docker Toolbox on my OS X laptop
  • Settling on a distro for Docker Engine (the Engine is Linux-based)
  • Configuration of Docker services using VirtualBox and Vagrant