Ansible Shenanigans: Part II – Sample Playbook Usage

In Part I, I talked about why Ansible is worth your time and how to build your own installation using Vagrant, VirtualBox, and a source checkout of Ansible. Now, let’s take a closer look at using Ansible along with the details of my demo playbook collection, ansible-mojo.

Once Ansible is up and running, it is extremely useful for managing nodes using ad-hoc commands. However, it really shines once you start developing collections of commands, or “plays”, in the form of “playbooks” to manage nodes. Similar to recipes and cookbooks in Chef, plays and playbooks are the basis for a best-practice use of Ansible to manage your infrastructure in a consistent, flexible, and repeatable fashion.
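
For example, once your inventory is in place, ad-hoc one-liners like these run a single module against a host or group (illustrative only; any module and host pattern will do):

ansible all -m ping
ansible myvm -m shell -a 'uptime'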

For ansible-mojo, I wanted to create a set of simple playbooks that would be helpful in demonstrating how to configure nodes with some basic things like:

  • a dedicated user account “ansible” for deployment standardization,
  • installation of standard packages,
  • management of users and sudoers content

Initial Ansible Playbook Run

The ansible-mojo repo contains several files: playbooks, a variables file, and a couple of shell environment files. All playbook content is based on YAML-formatted text files that are easily understandable. I opted to have a single primary playbook (main.yml) that does some initial node configuration and then includes other playbooks for specific configuration changes like configuring users (user-config.yml) and installing sysstat for SAR reporting (sysstat-config.yml).

Before I go into details on each of the playbooks, let’s go ahead and do an initial playbook run against our Ubuntu Vagrant box so that we can issue further commands using our dedicated deployment user account “ansible” instead of the “vagrant” user.

NOTE: Be sure that you change authorized_keys in ansible-mojo to contain the public key that you configured your ssh-agent to use for deployment as mentioned in Part I.

In this case, I am using a Vagrant machine called “myvm” and will specify the -e override for ansible_ssh_user to ignore the remote_user setting in ansible.cfg:

[rcrelia@fuji ansible]$ ansible-playbook main.yml -e ansible_ssh_user=vagrant

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [myvm]

TASK [Ensure ntpdate is installed] *********************************************
changed: [myvm]

TASK [Ensure ntp is installed] *************************************************
changed: [myvm]

TASK [Ensure ntp is running and enabled at boot] *******************************
ok: [myvm]

TASK [Ensure aptitude is installed] ********************************************
changed: [myvm]

TASK [Update apt package cache if older than one day] **************************
changed: [myvm]

TASK [Add user group] **********************************************************
changed: [myvm] => (item={u'user_uid': 2000, u'user_rc': u'bashrc', u'user_profile': u'bash_profile', u'sudoers': True, u'user_groups': u'users', u'user_gecos': u'ansible user', u'user_shell': u'/bin/bash', u'user_name': u'ansible'})
changed: [myvm] => (item={u'user_uid': 2001, u'user_rc': u'bashrc', u'user_profile': u'bash_profile', u'sudoers': False, u'user_groups': u'users', u'user_gecos': u'Bob Dobbs', u'user_shell': u'/bin/bash', u'user_name': u'bdobbs'})

... SNIP ...

TASK [Install sysstat] *********************************************************
changed: [myvm]

TASK [Configure sysstat] *******************************************************
changed: [myvm]

TASK [Restart sysstat] *********************************************************
changed: [myvm]

PLAY RECAP *********************************************************************
myvm : ok=17 changed=15 unreachable=0 failed=0

Success! Now, we should have the “ansible” user account provisioned on the Vagrant machine, and we will perform all future Ansible plays using that account, as specified by the remote_user setting in ansible.cfg.

A Closer Look at Playbooks

ansible-mojo contains several files, with all Ansible syntax included in the YAML files:

  • main.yml
  • vars.yml
  • reboot.yml
  • Playbooks nested below main.yml:
    • user-config.yml
      • ssh-config.yml
      • sudoers-config.yml
    • sysstat-config.yml

Each of these files, with the exception of vars.yml, is an Ansible playbook. I created a primary playbook called “main” which in turn references a file containing miscellaneous variables (another YAML file called vars.yml), along with two other playbooks, user-config (configures user accounts) and sysstat-config (configures SAR reporting). These latter two files are nested playbooks: they only run when included by the main playbook, and they rely on the vars file for their data. Finally, user-config includes two playbooks of its own, one for configuring SSH in user accounts and one for configuring sudo access.

At the beginning of the main playbook, we see that the plays are scoped to all hosts in Ansible’s inventory (hosts: all), that plays will be run as a privileged user on the nodes (become: yes), and that some variables have been stored outside of playbooks in a single location called vars.yml. This pattern of using vars_files allows you to have a single place for information that you may not want to distribute along with playbooks (e.g., user account details) for security reasons.
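
For reference, the top of a playbook written this way looks roughly like the following. This is a sketch of the pattern, not necessarily the exact contents of main.yml in the repo:

---
# main.yml (sketch)
- hosts: all            # apply the plays to every host in the Ansible inventory
  become: yes           # run tasks with privilege escalation
  vars_files:
    - vars.yml          # account details and other data kept out of the playbooks themselves
  tasks:
    # ... ntp, aptitude, and apt cache tasks go here ...
    - include: user-config.yml
    - include: sysstat-config.yml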

Next, Ansible tasks (or actions) are defined for installing and configuring ntpd on our nodes, along with aptitude, and a task to update the apt package cache on a node if the last update was more than 24 hours ago.
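
The apt cache refresh is a nice example of Ansible’s declarative style. A task along these lines, using the apt module’s cache_valid_time parameter (a sketch of the idea, not necessarily the repo’s exact syntax), only refreshes the cache when it is older than a day:

- name: Update apt package cache if older than one day
  apt:
    update_cache: yes
    cache_valid_time: 86400   # seconds; skip the update if the cache is fresher than this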

The nesting of playbooks is a pattern that supports reusability and portability of playbook content, provided you don’t hardcode variables in them. Let’s take a closer look at some of these nested playbooks.

Managing users: user-config.yml

Since user-config is a nested playbook, it consists of a sequence of tasks without any play-level parameters like host scoping or privilege settings. It does five things before calling its own nested playbooks at the end (the first two tasks are sketched just after the list):

  1. Creates a user’s primary group using the user’s UID as the GID, via the Ansible group module
  2. Creates a user via the Ansible user module
  3. Creates a user’s .bash_profile via the Ansible copy module
  4. Creates a user’s .bashrc via the Ansible copy module
  5. Creates a user’s $HOME/bin directory via the Ansible file module
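
Here is a rough sketch of how the first two of those tasks can be written with the group and user modules, iterating over the users list described below (my own sketch, not necessarily the repo’s exact content):

- name: Add user group
  group:
    name: "{{ item.user_name }}"
    gid: "{{ item.user_uid }}"      # primary group GID mirrors the user's UID
    state: present
  with_items: "{{ users }}"

- name: Add user
  user:
    name: "{{ item.user_name }}"
    uid: "{{ item.user_uid }}"
    group: "{{ item.user_name }}"
    groups: "{{ item.user_groups }}"
    comment: "{{ item.user_gecos }}"
    shell: "{{ item.user_shell }}"
    state: present
  with_items: "{{ users }}"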

The syntax is pretty clear about what is happening if you have even the most basic sort of experience managing user accounts on a UNIX/Linux server. Isn’t Ansible awesome?

What may not be so clear is the syntax that uses the “item.” prefix in variable names. Basically, I designed the playbook to use Ansible’s with_items feature so I could iterate through multiple users without duplicating a lot of syntax. The “{{ users }}” variable references a YAML list called users that is stored in the variables file vars.yml. Looking at that list, it becomes apparent that we are cycling through attributes of each user without hardcoding any user-specific variables in our playbook:

users list from vars.yml:

users:
  - user_name: ansible
    user_gecos: 'ansible user'
    user_groups: "users"
    user_uid: 2000
    user_shell: "/bin/bash"
    user_profile: bash_profile
    user_rc: bashrc
    sudoers: yes
  - user_name: bdobbs
    user_gecos: 'Bob Dobbs'
    user_groups: "users"
    user_uid: 2001
    user_shell: "/bin/bash"
    user_profile: bash_profile
    user_rc: bashrc
    sudoers: no

When you write playbooks in Ansible, you should design your plays as generically as possible so that you can re-use your playbooks across different projects and nodes.

Next, user-config includes the ssh-config playbook, which has two tasks: setting up a user’s .ssh directory and managing the user’s authorized_keys content. In this case, each user is being configured with the same authorized_keys data, which is probably not how you would configure things in an actual deployment from a security best-practices perspective.
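
Those two tasks map roughly onto the file and authorized_key modules, something like this (a sketch; ansible-mojo may use the copy module for the key material instead):

- name: Setup user .ssh directory
  file:
    path: "/home/{{ item.user_name }}/.ssh"
    state: directory
    owner: "{{ item.user_name }}"
    group: "{{ item.user_name }}"
    mode: "0700"
  with_items: "{{ users }}"

- name: Install authorized_keys for each user
  authorized_key:
    user: "{{ item.user_name }}"
    key: "{{ lookup('file', 'authorized_keys') }}"
  with_items: "{{ users }}"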

Lastly, user-config includes the sudoers-config playbook, which uses Ansible’s lineinfile module to add sudoers syntax allowing passwordless sudo invocation. We need this for our ansible account, which will be performing Ansible operations for us non-interactively. This play is special in that it runs only when the user is supposed to be added to sudoers (via Ansible’s when clause). How is this controlled? Through the sudoers attribute from the users list in vars.yml, excerpted below and followed by a sketch of the play:

users:
  - user_name: ansible
    user_gecos: 'ansible user'
    user_groups: "users"
    user_uid: 2000
    user_shell: "/bin/bash"
    user_profile: bash_profile
    user_rc: bashrc
    sudoers: yes
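
A minimal sketch of such a play, combining lineinfile with a per-item when check (my guess at the shape; ansible-mojo’s actual syntax may differ):

- name: Grant passwordless sudo to deployment users
  lineinfile:
    dest: /etc/sudoers
    line: "{{ item.user_name }} ALL=(ALL) NOPASSWD: ALL"
    state: present
    validate: 'visudo -cf %s'   # refuse to write a sudoers file that does not parse
  with_items: "{{ users }}"
  when: item.sudoers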

Managing packages: sysstat-config.yml

One of the classic UNIX/Linux performance monitoring tools is sar/sadc. In the open-source world, sar ships as part of the sysstat package. One of the first things I do on a new machine is to make sure sar is installed, configured, and operational. So, I created a playbook that installs and configures sysstat.

One neat tool in Ansible is the lineinfile module, which is useful for making sure a specific line is present in a text file, or for replacing a pattern within a line via a back-referenced regular expression. In the case of sysstat, its config file on Ubuntu, /etc/default/sysstat, ships with the collector turned off (i.e., ENABLED="false"). I used the lineinfile module in sysstat-config.yml to change that line and activate sysstat:

---
# Install sysstat for sar reporting
- name: Install sysstat
  apt:
    name: sysstat
    state: present

- name: Configure sysstat
  lineinfile:
    dest: /etc/default/sysstat
    regexp: '^ENABLED='
    line: 'ENABLED="true"'
    state: present

- name: Start sysstat
  service:
    name: sysstat
    state: started
    enabled: yes

After the sysstat package is installed (task #1), and its configuration file modified (task #2), I tell Ansible to make sure sysstat is started and enabled to start on reboot via the service module (task #3).

BONUS Play: Interactive Ansible and Server Reboots

Everything you do with Ansible is typically designed to be non-interactive. However, depending on your workflow, there may be operations where a bit of interactive processing makes sense. I thought it would be interesting to trigger a server reboot and pause an Ansible playbook until the server(s) all came back online. That is the purpose of the reboot.yml playbook. It could be used after updating kernel packages on hosts, for example. It would need control logic added if rebooting every host in Ansible’s inventory simultaneously is undesirable. If you want to constrain a run of this all-hosts-scoped playbook to a single host in your inventory, you can use the --limit filter:

ansible-playbook --limit myvm reboot.yml
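
For reference, a reboot play of this sort typically fires an asynchronous shutdown command and then uses the wait_for module, run on the control machine, to pause until SSH comes back. A sketch of that pattern, assuming the same all-hosts scope and become used elsewhere (not necessarily the exact contents of reboot.yml):

---
# reboot.yml (sketch)
- hosts: all
  become: yes
  tasks:
    - name: Reboot the server
      shell: sleep 2 && shutdown -r now "Ansible-triggered reboot"
      async: 1
      poll: 0
      ignore_errors: true

    - name: Wait for the server to come back online
      local_action:
        module: wait_for
        host: "{{ inventory_hostname }}"
        port: 22
        state: started
        delay: 30
        timeout: 300
      become: false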

Summary

This wraps up my overview of ansible-mojo’s playbook content and organization. Hopefully by now, you recognize the power and value of Ansible and appreciate just how easy it is to use. In Part I, you learned how to arrange and use Vagrant, VirtualBox, and a source-based copy of Ansible to create a lab environment for your Ansible testing.

In Part II, you learned how to create and use a sequence of Ansible plays to achieve some very common systems deployment goals: creating a deployment user, managing users, distributing ssh authorizations, configuring sudo, and installing packages.

You’ve also learned how to nest playbooks and why you may want to consider stashing certain variables and configuration lists in a file separate from your playbooks.

By downloading ansible-mojo, you can start using Ansible on your own machine immediately, which was my goal for releasing it. I hope you find Ansible as much of a joy to work with as I do.

Future changes to ansible-mojo and accompanying blog posts may or may not include:

  • creating more distro-agnostic playbooks (e.g., plays that work for both CentOS and Ubuntu)
  • integration with Vagrant for local provisioning
  • development of Ansible roles for publishing to Galaxy

Until then, happy hacking and may Ansible make your world better! Cheers!!

 

Ansible Shenanigans: Part I – Initial Setup and Configuration

I’ve been spending time learning Ansible, the Python-based configuration management framework created by Michael DeHaan. There are two main features that make Ansible worth considering for your configuration management needs: ease of implementation via an agentless design (based on SSH), and a DSL that closely resembles traditional shell scripting command syntax. Ansible “plays” are very easily read and understood whether you are a sysadmin, developer, or technical manager. Having used both Puppet and Chef in the past, which require a client/agent installation, I truly appreciate how quickly one can deploy Ansible to manage servers with minimal overhead and a small learning curve.

One of the best resources I’ve found so far to aid in learning Ansible, in addition to the extensive and quality official Ansible documentation, is Jeff Geerling’s most excellent “Ansible for DevOps.” The author steps you through using Vagrant-based VM’s to explore the use of Ansible for both ad-hoc commands and more complex playbook and role-based management.

All of the work I’ve done with Ansible for this post is publicly available on GitHub, so feel free to clone my ansible-mojo repo and follow along.

Lab setup – Vagrant, VirtualBox, and Ansible

I use a mix of custom VirtualBox VMs and Vagrant-based VMs for all of my home devops lab work. For the purposes of this post, I am limiting myself to a Vagrant-based solution as it’s extremely simple and dovetails nicely with the approach in “Ansible for DevOps”. So let’s take a closer look…

I’m using Vagrant 1.8.6 and VirtualBox 5.1.6 (r110634) on my MacBook Pro running Yosemite (10.10.5 w/Python 2.7.11). Most of my recent experience has been with CentOS and Amazon Linux, so I decided to refresh my knowledge of Ubuntu, choosing Ubuntu 16.04.1 LTS (Xenial Xerus) for my VMs and using the bento/ubuntu-16.04 image hosted at HashiCorp’s Atlas.

To get started, simply add the bento Ubuntu image to your Vagrant/VirtualBox installation. I store all my Vagrant machines in a directory off my home directory called “vagrant-boxes”:

mkdir ~/vagrant-boxes/bento_ubuntu
cd ~/vagrant-boxes/bento_ubuntu
vagrant init bento/ubuntu-16.04; vagrant up --provider virtualbox

At this point, you should have a working Vagrant machine running Ubuntu 16.04.1 LTS!

Note: I originally started this work using Canonical’s ubuntu/xenial64 official build images for Vagrant. However, I immediately ran into an issue that made provisioning with Ansible a bit wonky: the Canonical image does not ship with Python 2.x installed (Python 3.x is there, but it is not used for Ansible operations). Be advised of this as you set up your own Ansible sandbox with Vagrant.

Because I like to be able to SSH into my Vagrant machines from anywhere inside my home network, I modify the Vagrantfile to access the VM using a hardcoded IP address that I’ve reserved in my router’s DHCP table. The relevant line if you want to do something similar is:

config.vm.network "public_network", ip: "192.168.0.99", bridge: "en0: Wi-Fi (AirPort)"

I then use this IP address in my local hosts file, which allows me to use it via a hostname of my choosing within the Ansible hosts file.
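
Concretely, that means one line in /etc/hosts on my laptop and a matching entry in the Ansible inventory. The hostname “myvm” and the group name here are just choices of mine; use whatever suits you:

# /etc/hosts on the control machine
192.168.0.99    myvm

# Ansible inventory file ("hosts"); the group name is arbitrary
[vagrant]
myvm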

Next, I had to install Ansible on my MacBook. I could have used the package found in Homebrew, but that version is currently 2.1.0 and I wanted to work from the most current stable release, which is v2.2.0. So, I opted to clone that repo from Ansible’s GitHub project and work from source:

git clone git://github.com/ansible/ansible.git --recursive
cd ./ansible
source ./hacking/env-setup

The last step configures your machine to run Ansible out of the source directory from the cloned repo. You can integrate the environment settings it generates into your shell profile so that the pathing is always current to your installation. You should now have a working copy of Ansible v2.2.0:
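
For example, to have the environment configured in every new shell, you can source the script from your profile. This assumes the clone lives at ~/code/ansible; adjust the path to wherever you cloned the repo:

# append to ~/.bash_profile
source ~/code/ansible/hacking/env-setup -q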

[rcrelia@fuji ansible (stable-2.2=)]$ ansible --version
ansible 2.2.0.0 (stable-2.2 e9b7d42205) last updated 2016/10/20 10:00:56 (GMT -400)
 lib/ansible/modules/core: (detached HEAD 42a65f68b3) last updated 2016/10/20 10:00:59 (GMT -400)
 lib/ansible/modules/extras: (detached HEAD ddd36d7746) last updated 2016/10/20 10:01:02 (GMT -400)
 config file = /etc/ansible/ansible.cfg
 configured module search path = Default w/o overrides

Note: I keep my Ansible files in a directory under my $HOME location, including ansible.cfg, which is normally expected by default to be in /etc/ansible. While you can use environment variables to change the expected location, I decided to just symlink /etc/ansible to the relevant location in my $HOME directory. YMMV.

sudo ln -s /Users/rcrelia/code/ansible /etc/ansible

Using Ansible With Your Vagrant Machine

In order to use Ansible, a minimum of two configuration files needs to exist in whatever location you are using for your work: ansible.cfg and hosts. All other content will depend on whatever playbooks, host config files, and roles you create. The ansible.cfg in my repo is minimal, with the defaults removed; however, you can find a full version in there named ansible.full.cfg for reference. Additionally, you will want to make sure you have a working log file for Ansible operations, the default being /var/log/ansible.log. The output from all issued Ansible commands is logged in ansible.log.
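
As a point of reference, a minimal ansible.cfg along these lines might contain just a few settings (a sketch; see ansible.cfg and ansible.full.cfg in ansible-mojo for the actual contents):

[defaults]
inventory   = ./hosts
remote_user = ansible
log_path    = /var/log/ansible.log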

Since Ansible uses SSH to communicate with managed nodes, you will want to use an account with root-level sudo privileges that is configured for SSH access, ideally passwordless. I personally use an ssh-agent process to store credentials and make sure the nodes allow access with that private key via authorized_keys. Do whatever makes sense for your environment.

By default, the bento Vagrant machine ships with a sudo-capable user called “vagrant”, whose private SSH key can be used for the initial Ansible run. I added that key to my ssh-agent:

ssh-add ~/vagrant-boxes/bento_ubuntu/.vagrant/machines/default/virtualbox/private_key

At this point, I can now communicate with my Vagrant Ubuntu VM using Ansible over a passwordless SSH connection. Let’s test that with a simple check on the node using Ansible’s setup module:

[rcrelia@fuji ansible]$ ansible myvm -m setup -e ansible_ssh_user=vagrant|head -25
myvm | SUCCESS => {
 "ansible_facts": {
 "ansible_all_ipv4_addresses": [
 "10.0.2.15",
 "192.168.0.99"
 ],
 "ansible_all_ipv6_addresses": [
 "fe80::a00:27ff:feb5:5b5e",
 "fe80::a00:27ff:fe28:3f0"
 ],
 "ansible_architecture": "x86_64",
 "ansible_bios_date": "12/01/2006",
 "ansible_bios_version": "VirtualBox",
 "ansible_cmdline": {
 "BOOT_IMAGE": "/vmlinuz-4.4.0-38-generic",
 "quiet": true,
 "ro": true,
 "root": "/dev/mapper/vagrant--vg-root"
 },
 "ansible_date_time": {
 "date": "2016-10-26",
 "day": "26",
 "epoch": "1477494044",
 "hour": "15",
 "iso8601": "2016-10-26T15:00:44Z",
[rcrelia@fuji ansible]$

Note that I use the -e option to specify the default Vagrant user for my Ansible session. This is an override and is only required for the initial playbook run from ansible-mojo. Once we’ve applied our main playbook, which sets up a user called “ansible”, we can use that user for all subsequent Ansible operations (as specified by the remote_user setting in ansible.cfg).

At this point, we have a working installation of Ansible with a single manageable Ubuntu Xenial Xerus node based on Vagrant. In Part II, I will cover the workings of ansible-mojo and discuss various details around playbook construction, layering of plays, etc.

 

Docker Shenanigans: Part II

For my Docker adventures, I opted to install Docker Toolbox on my MacBook Pro running Yosemite. Even though Docker Toolbox includes VirtualBox, I had an existing VirtualBox installation, so I used that to host my Docker Engine instance instead of using docker-machine from Toolbox. My decision was based on wanting to test Docker services running on different host OSes, like CentOS, Ubuntu, and CoreOS, using the same client software on my MBP. So, I built an Ubuntu Server 16.04 VM and configured Vagrant to use that image.

Using Ubuntu 16.04 LTS for Docker Engine, or Why Is Service Control So Wonky?

One pitfall that I ran into when installing Docker services on my Ubuntu VM was that the default install uses a local socket instead of TCP for daemon access. Since I wanted to be able to communicate with Docker from other nodes on my home network, I needed to change the default startup configuration. As it turns out, this is annoyingly less than straightforward because of the inconsistent state of service configuration for systemd on my VM.

Systemd is the current framework for service control on Ubuntu and other distros like CentOS. However, the implementation is relatively new and has some gotchas: the stock systemd unit does not honor the traditional DOCKER_OPTS setting in /etc/default/docker, which had been the usual way of controlling how the Docker daemon advertises itself, so an alternative configuration in the form of a systemd drop-in is required. In my case, I needed to do the following:

mkdir /etc/systemd/system/docker.service.d
vi /etc/systemd/system/docker.service.d/docker-tcp.conf

In docker-tcp.conf, I used this syntax to configure TCP communications with my Docker VM:

[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon --host=tcp://192.168.0.50:2375 --host=fd://

where 192.168.0.50 is the exposed interface on my home network for the Docker Engine. To activate this configuration, I had to tell systemd to reload and restart my Docker instance:

systemctl daemon-reload
systemctl restart docker

After this, I set the DOCKER_HOST environment variable on my client node

export DOCKER_HOST=192.168.0.50:2375

and was then able to connect to Docker Engine in the Ubuntu VM from my native OS X Docker client:

[rcrelia@fuji ~]$ export DOCKER_HOST=192.168.0.50:2375
[rcrelia@fuji ~]$ docker info
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 3
Server Version: 1.12.2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 24
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: overlay host null bridge
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-43-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 992.4 MiB
Name: ubuntu
ID: H3HJ:MMBL:4S3N:56X7:JW2P:AUC6:6XRT:UNV4:KS2Q:UNDM:JXJ3:5MSH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8
[rcrelia@fuji ~]$

At this point, I now have a working Docker Engine VM that is independent of the client software installed on my laptop via Docker Toolbox.

Docker Shenanigans: Part I

I recently decided to spend some time playing with Docker. As much as I appreciate the theory behind container-based deployments, I haven’t really sat down and explored it since the days when Sun first introduced their container virtualization technology.

The Docker concept is similar to Sun’s original offering: create relatively lightweight runtime images of an application for ease of deployment across hosts, especially in development and testing environments. These environments historically have been a place of fewer resources in even the most successful of businesses and organizations, so anything that provides logical separation of applications while utilizing more of the available compute resources makes good sense.

In addition to driving more efficient usage of development compute resources, container technologies have matured in the last ten years to the point of being rather sophisticated and useful for production workloads. While Docker is really an application management framework based on containers, its ease of use and rich features have been very disruptive and responsible for a surge in the interest in container virtualization in the last couple of years. Other container-based application frameworks to be aware of include Canonical’s LXC/LXD and CoreOS’s rkt.

The benefits of container-based computing are straightforward:

  • more consistent application deployment – fewer artifact dingleberries hanging on from the last deployment or three
  • portability of applications – changes are easily tracked, tested, and distributed in a flexible manner across many hosts with minimal manual handoffs between humans in different roles (e.g., dev vs. ops)
  • fits well with CI/CD models in Agile environments – containers are often used to deploy microservices
  • clusterable-by-design architecture – integrates with existing configuration management frameworks and HA designs in public cloud platforms

So, after some reading of Matthias & Kane’s “Docker: Up and Running” book from O’Reilly Associates, I was ready to take the plunge and create a Docker environment capable of supporting multiple hosts to deliver container images and hopefully explore the capabilities of cluster management via Docker Swarm.

In Part II, I’ll talk about how I configured my Docker R&D lab at home:

  • Installation of Docker Toolbox on my OS X laptop
  • Settling on a distro for Docker Engine (the Engine is Linux-based)
  • Configuration of Docker services using VirtualBox and Vagrant