GitLab Workflow and VSC

I work almost exclusively these days in Visual Studio Code, for reasons I’ve blogged about previously. I also primarily use GitLab for SCM, and today I discovered a nifty VSC extension: GitLab Workflow.

To use GitLab Workflow, add your GitLab instance URL (if self-hosted) and a GitLab personal access token to your user settings. The extension prompts you for this configuration the first time you launch VSC after installing it.
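For reference, the instance URL lives in your user settings; it looks something like this (a sketch based on my reading of the extension’s docs, so double-check the setting name against the current version):

// settings.json (VSC user settings)
{
  "gitlab.instanceUrl": "https://gitlab.example.com"
}

The personal access token itself is stored separately when the extension prompts you for it, not in settings.json.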

Some of the immediate benefits I’m realizing with the extension are:

  • ability to create a merge request (MR) for your current working branch
  • ability to quickly inspect a CI pipeline via a status bar indicator, then open the pipeline in your browser with a click
  • sidebar navigation for GitLab, including issues and MRs

I love working in GitLab and VSC, and now, especially with some custom keybindings in VSC, I can work even more productively within the editor window without needing to switch over to my browser.

Empathy In The Last Place You’d Expect It

For *nix tech friends, who’d have thought we’d ever see this day?

“I am going to take time off and get some assistance on how to
understand people’s emotions and respond appropriately.”

               — Linus Torvalds, LKML, Sun, 16 Sep 2018 12:22:43 -0700

I’ve always chalked up Linus’ poor empathy skills to the demands of maintaining this amazing juggernaut he created and gave to the world freely. This was especially so in the early days, when he was regularly on the defensive against others trying to do the same thing he was, others who were older, more experienced, more respected, etc. (e.g., Tanenbaum, rms, the FreeBSD crew). He was but a young grad student, albeit an extremely industrious and clever one. But he has always had a problem with empathy, and it was always just part of the landscape, perhaps even a “rite of passage” if you will, for becoming a kernel developer (I never was a kernel dev, but as a well-rounded system administrator and free software zealot, I followed the LKML closely for its first decade).

This was, of course, long before the concept of empathy ever entered into the fabric of professional technologists. Some of Linus’ infamous attacks on others who were helping him as volunteers were mind-blowingly abusive and the stuff of hushed conversation at conferences and banter at Linux User Group functions all over. In the late 1990s, I was doing pro bono work for rms and the GNU Project, first as a volunteer evaluator and then as the first coordinator of GNU software evaluators (all volunteers). We worked directly with Richard via email/IRC to review software submitted by developers to be considered for membership in the GNU operating system.

And while rms could be highly challenging to work with/for (and he was, somewhat regularly), he was never abusive. He pushed us hard and could be as stubborn as a mule, but he was always fair and, ultimately, respectful if you had a logical position that differed from his. I wouldn’t say, however, that he was ever “empathic”, because again… it wasn’t a concept that was part of the fabric of FOSS development at the time. It was a very different period in the history of Internet technology. You had to have fire-proof pants to be involved with a lot of what was going on at the time (remember flame wars?).

So, his latest LKML posting reveals a different Linus now, one whom I don’t think I ever expected to see, and it’s promising to see this evolution (if that’s what it is) and what it may mean for the future of his interface with the kernel development community. I’m glad to see this day come and hope it makes Linux even more successful than it already has been. Good luck, Linus!

Pythonic Moments

I’ve been heads-down with development work for what seems like months, a lot of it in Python, and recently found a couple of great resources that I wanted to share.

If you code in Python and haven’t already heard of Dan Bader or his book, “Python Tricks: A Buffet of Awesome Python Features”, check both of them out. Dan has a series of helpful YouTube videos as well, some of which cover Python Tricks material.

If you listen to podcasts, check out Michael Kennedy’s “Talk Python To Me” series. He also does a more concise, headline-oriented podcast called “Python Bytes”.

Huzzah, I’m AWS Certified!

I passed the AWS Certified Solutions Architect – Associate exam on 12/11 and the AWS Certified Developer – Associate exam on 12/15. Woohoo!


Python’s logging module in a boto3/botocore context

Python’s logging module provides a powerful framework for adding log statements to code, well beyond what you can do with print() statements. It provides a system of syslog-style logging levels that can be used to produce both on-screen runtime diagnostics and more detailed logs with full debug-level insight into per-module/submodule behavior.
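As a quick illustration of the level hierarchy (the configured level acts as a threshold, much like syslog priorities):

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

log.debug('not shown: DEBUG is below the INFO threshold')
log.info('shown')
log.warning('shown: WARNING outranks INFO')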

Managing logging can be complicated, especially around the hierarchical nature of the log streams it provides. I have developed a simple boto3 script that integrates logging, to illustrate a basic usage that is easy to adopt and, in the end, not much more work than using print() statements. For detailed information on logging beyond what I present here, consult the excellent Python docs on the topic, as well as the links in the References section at the end of this post.

Logging Configuration

The logging setup I am using involves two configuration files, logger_config.yaml and logger_config_debug.yaml. The difference between the two has to do with the log levels used by the log handlers. By default, the example module deployVpc.py uses the logger_config setup, which produces no screen output except at the ERROR level and above. It does produce a log file, however, containing messages at the INFO level for the module and at the WARNING level for boto-specific calls.

Note: boto (including botocore) ships with some logging active at the INFO level. While not as detailed as DEBUG, there’s enough busyness at that level that you will likely want to suppress boto’s messages except when troubleshooting or debugging your code. That’s the approach the current configuration takes: custom logger definitions for boto and friends, so that the root logger does not display boto’s native log-level messages by default.
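If you ever need the same effect without a configuration file, the programmatic equivalent is just a few setLevel() calls:

import logging

# Raise the threshold on boto's loggers so their INFO chatter is suppressed
for name in ('boto', 'boto3', 'botocore'):
    logging.getLogger(name).setLevel(logging.WARNING)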

Let’s take a look at the default logging configuration file I’ve put together, logger_config.yaml:

---
version: 1
disable_existing_loggers: False
formatters:
  simple:
    format: "%(asctime)s %(levelname)s %(module)s %(message)s"
  fancy:
    format: "%(asctime)s|%(levelname)s|%(module)s.%(funcName)s:%(lineno)-2s|%(message)s"
  debug:
    format: "%(asctime)s|%(levelname)s|%(pathname)s:%(funcName)s:%(lineno)-2s|%(message)s"

handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout

  screen:
    class: logging.StreamHandler
    level: ERROR
    formatter: fancy
    stream: ext://sys.stdout

  logfile:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    formatter: debug
    filename: "/tmp/deployVpc.log"
    maxBytes: 1000000
    backupCount: 10
    encoding: utf8

loggers:
  boto:
    level: WARNING
    handlers: [logfile, screen]
    propagate: no
  boto3:
    level: WARNING
    handlers: [logfile, screen]
    propagate: no
  botocore:
    level: WARNING
    handlers: [logfile, screen]
    propagate: no
  deployVpc:
    level: INFO
    handlers: [logfile, screen]
    propagate: no
  __main__:
    level: INFO
    handlers: [logfile, screen]
    propagate: no

root:
  level: NOTSET
  handlers: [console, logfile]

I chose to use YAML for the configuration file as it’s easier to parse, both visually and programmatically. Python’s logging.config traditionally uses an INI file format (via fileConfig()), but dictionary-based configuration loaded from JSON or YAML is easily supported via dictConfig().

At the top of the file is some basic configuration information. Note the disable_existing_loggers setting, which lets us avoid timing problems with module-level invocation of loggers. Modules imported early in your main script create their loggers before the configuration has been loaded, and with the default value of True, dictConfig() would disable those pre-existing loggers. Setting disable_existing_loggers to False avoids that problem.
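Concretely, the pattern that would otherwise break is a module that grabs its logger at import time:

# mymodule.py -- imported before logging.config.dictConfig() runs
import logging

# This logger exists before the config loads; with the default
# disable_existing_loggers: True, dictConfig() would silently disable
# any such logger not explicitly named in the configuration.
logger = logging.getLogger(__name__)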

The remaining file consists of four sections:

  • formatters
  • handlers
  • loggers
  • root logger definition

Formatters

Formatters are used to define the log message string format. Here, I am using three different formatters:

  • simple – very simple and brief
  • fancy – more detail including timestamp for a helpful log entry
  • debug – fancy with module pathname instead of module name, useful for boto messages

By default, I use simple for the console handler (used by the root logger), fancy for the screen handler, and debug for the logfile handler.
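To make the difference concrete, here is how the same INFO message would render under each formatter (illustrative output only; the module, function name, and line number are made up):

2018-09-16 12:00:00,123 INFO deployVpc Creating VPC                        (simple)
2018-09-16 12:00:00,123|INFO|deployVpc.create_vpc:42|Creating VPC          (fancy)
2018-09-16 12:00:00,123|INFO|./deployVpc.py:create_vpc:42|Creating VPC     (debug)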

Handlers

Handlers define at what level, in what format, and exactly where a particular log message should be emitted. I’ve kept console as a basic StreamHandler and added a second StreamHandler (screen) plus a RotatingFileHandler (logfile). Python’s logging module supports many other handler types as well, including Syslog, SMTP, and HTTP handlers. Very flexible and powerful!

  • console – used by the root logger
  • screen – log ERROR level and above using fancy formatting to the screen/stdout
  • logfile – log DEBUG level messages and above using debug formatting to a file in /tmp that is automatically rotated at 1 MB, with 10 copies retained

Loggers

Loggers are referenced in your code whenever a message is generated. The configuration for a given logger lives in this section of the configuration file. In my case, I wanted a separate logger per module/function if necessary, so I’ve made entries at that level. I also include entries for boto and friends so I can adjust their default log levels and avoid their detailed output except when and where I want it (i.e., logging at WARNING instead of INFO or DEBUG for normal operation). A logger entry also defines where log streams end up; in this case, I send all streams to both my screen handler and my logfile handler.

I also don’t want custom loggers to propagate messages throughout the logging hierarchy (i.e., up to the root logger). So I’ve set propagate to “no”.

Implementing logging in code

Setup

I created a module called loggerSetup.py, which is where I initialize logging from the configuration files:

#!/usr/bin/env python
"""Setup logging module for use"""

import os
import logging
import logging.config
import yaml

home = os.path.expanduser('~')
logger_config = home + "/git-repos/rcrelia/aws-mojo/boto3/loggerExample/logger_config.yaml"
logger_debug_config = home + "/git-repos/rcrelia/aws-mojo/boto3/loggerExample/logger_config_debug.yaml"

def configure(default_path=logger_config, default_level=logging.DEBUG, env_key='LOG_CFG'):
    """Setup logging configuration"""
    path = default_path
    value = os.getenv(env_key, None)
    if value:
        path = value
    if os.path.exists(path):
        with open(path, 'rt') as f:
            config = yaml.safe_load(f.read())
        logging.config.dictConfig(config)
    else:
        logging.basicConfig(level=default_level)

def configure_debug(default_path=logger_debug_config, default_level=logging.DEBUG, env_key='LOG_CFG'):
    """Setup logging configuration for debugging"""
    path = default_path
    value = os.getenv(env_key, None)
    if value:
        path = value
    if os.path.exists(path):
        with open(path, 'rt') as f:
            config = yaml.safe_load(f.read())
        logging.config.dictConfig(config)
    else:
        logging.basicConfig(level=default_level)

This module defines two functions: configure() and configure_debug(). The latter provides another way of running a non-default logging configuration without using the LOG_CFG environment variable (i.e., on a per-module basis). When you set up logging in your module like so:

loggerSetup.configure()
logger = logging.getLogger(__name__)

You would simply edit the first line to use .configure_debug() instead of .configure().

Usage

Usage is straightforward: simply do the following in each module where you wish to use logging. Refer to the deployVpc.py script for the full syntax and usage around these bits of code.

Note: deployVpc.py requires AWS API keys stored in a config profile (I used one called ‘aws-mojo’; change it to your own favorite profile). It will create a VPC and Internet Gateway in your AWS account, but by default it will also remove those objects. Caveat emptor…

  1. Import the logging modules and the loggerSetup module:

     import logging, logging.config, loggerSetup

  2. Activate the logging configuration and define your logger for the module:

     loggerSetup.configure()
     logger = logging.getLogger(__name__)

     Note: By using __name__ instead of a custom logger name, you can easily re-use this setup code in any module.

  3. Add a logger command to your code using the level of your choice (a consolidated sketch follows this list):

     logger.info('EC2 Session object created')
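Putting those steps together, a minimal module wired for logging might look like this (a sketch of the pattern only; see deployVpc.py in the repo for the full version):

#!/usr/bin/env python
"""Minimal example module wired up for logging"""

import logging
import logging.config

import loggerSetup

# Load the YAML-driven configuration, then name the logger after the module
loggerSetup.configure()
logger = logging.getLogger(__name__)

def main():
    logger.info('starting up')    # lands in the logfile; screen only shows ERROR and above
    logger.debug('extra detail')  # filtered out by the INFO threshold in the default config

if __name__ == '__main__':
    main()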

That’s all there is to it. Below are some screenshots that show the handler output (screen and logfile) for both the default and debug configurations. Hopefully this will encourage you to look at using Python’s logging framework for your own projects.

The full source for all of the logging module configuration as well as sample boto script is available over on GitHub in my aws-mojo repository.

Screenshots

Example: Default configuration – output to screen handler (should be no output except ERROR and above)

Default screen handler output

Example: Default configuration – output to logfile handler (should be messages at INFO and above for your code and at WARNING and above for boto library code messaging)

Default logfile handler output

Example: Debug configuration – output to screen handler (should be messages at INFO and above for your code and at WARNING and above for boto library code messaging)

Debug screen handler output

Example: Debug configuration – output to logfile handler (should be messages at all levels, DEBUG and above, for both your code and boto library code messaging)

Debug logfile handler output

References


Followup: Visual Studio Code

So, it’s been almost a couple of weeks now of hardcore Visual Studio Code (VSC) usage on my part. I have to say, it’s fantastic. Not a single crash after a solid two weeks of varied development (CloudFormation, Python, shell, and HCL (Terraform)) and at some points, intense Git activity with different repositories and different SCM endpoints. It’s easily 30% more performant than my Atom environment ever was.

The rock solid Git integration is the one feature I appreciate the most. It really works well with everything I do on a regular basis. I did install the GitEasy package just to see if it added anything beyond the built-in support. So far, I only use one GitEasy command reliably, and that’s GitEasy:PushCurrentBranchToOrigin.

I’ve also been able to increase my normal productivity after installing a marketplace extension called “macros”. I use macros to automate combinations of git commands I often chain together manually (as well as any other keybindings I see fit to construct).
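As a sketch of how that looks (the macro name here is my own invention; the command IDs are VSC’s built-in Git commands), you define the macro in settings.json and bind a key to it in keybindings.json:

// settings.json
"macros": {
  "stageCommitPush": [
    "git.stageAll",
    "git.commitStaged",
    "git.push"
  ]
}

// keybindings.json
{ "key": "ctrl+alt+g", "command": "macros.stageCommitPush" }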

Nice job, Microsoft. I think that’s the first time I’ve said those words after nearly 30 years in technology.


Did not see this coming…

I read a blog post the other day about Visual Studio Code vs. Atom. I was surprised to hear so much positivity about Code, but I confess that surprise may come from my experience-based deafness to anything extolling the virtues of anything by Microsoft. And before you bust my chops on that, note the “experience-based” and accept that I may have a valid stance after over 25 years as a technology professional… but, I digress.

I’ve been using Atom exclusively for about 18 months now, hours upon hours, day after day. I absolutely, and obsessively, love this editor. I have it tweaked and configured perfectly for my workflow and coding style.

I run 99% of my git commands within Atom via the git-plus package, and manage my repos with the Project Manager package. I have both vi/vim and ex-capable command shortcuts and keystrokes, all of which reflect decades of motor memory and are very important for my productivity. In fact, I’d say they are critical to it. Not long ago, my particular configuration hit a regression bug in Atom’s deprecated built-in vim support, which reduced my cadence to a bad limp until I was able to migrate to the vim-mode-plus community package as a replacement. I had avoided doing so because, until recently, that package did not integrate with the ex-mode command package I relied on. That’s all resolved now, but it sure did create a disturbance in the Force for a bit.

I use lots of other packages as well for linting different languages including Python, Ruby, JSON, CloudFormation, Ansible, and Terraform. I appreciate the easy-on-the-eyes color themes I’ve found, my current combo is Atom Dark for the UI and Gruvbox Plus for Syntax. Atom is just freaking great!

But, hey, I’m all for trying new things, just to say I’ve tried them. Especially when I see a lot of other folks buzzing about something…

Wow.

Visual Studio Code is blowing me away.

I installed the latest Mac version and have been running it for 24 hours now, side by side with Atom. The interface is nearly identical to Atom. The command keystrokes and palette can be made the same by installing Atom keymap support. There are packages in the “Marketplace“, for free, that give me all the extras I rely on with my configured Atom environment. And, on top of all that, it’s faster and uses fewer system resources. It also has the feel of a true IDE and not just a fancy editor, with built-in debugging facilities, built-in git support, etc.

Now, I’m not about to jump ship completely from Atom. It’s been too good to me for that. But, I’m giving Visual Studio Code a solid trial run. I want to find its shortcomings and compare those with Atom.  And then I’ll make a tough decision.

Kudos to you, Microsoft. This may be the best product you’ve ever made.


Stupid Boto3 Tricks – get_aws_region()

For some use cases, it’s not feasible to rely on an EC2 instance having any boto or AWS configuration information available (e.g., you are using an instance profile/role instead of API keys). This is a problem when it comes to establishing client sessions with services, where you need to pass the default region as a parameter to boto3.setup_default_session().

Here’s one way to solve the problem: pull the availability-zone element out of the EC2 instance metadata, then trim it to drop the AZ letter (e.g., us-east-1b -> us-east-1).

First, import the urllib2 module into your code (Python 2.x):

import urllib2

Then, create a function like so that returns the AWS region name to the calling program:

def get_aws_region():
    # still no equivalent of boto.utils in boto3, so I have to do this janky thing...
    myAz = urllib2.urlopen('http://169.254.169.254/latest/meta-data/placement/availability-zone').read()
    myRegion = myAz[:-1]  # drop the trailing AZ letter, e.g. us-east-1b -> us-east-1
    return myRegion
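If you’re on Python 3, the equivalent using urllib.request would look something like this (an untested sketch of the same approach):

from urllib.request import urlopen

def get_aws_region():
    # read() returns bytes on Python 3, so decode before trimming the AZ letter
    my_az = urlopen('http://169.254.169.254/latest/meta-data/placement/availability-zone').read().decode('utf-8')
    return my_az[:-1]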

A more helpful git log

The git log command is useful for viewing the history of changes in a repository, but the default output leaves a lot to be desired:

[Screenshot: default git log output]

An easy enhancement to the default is to add the "--oneline" parameter, which makes it easier to see commit history in a linear fashion:

[Screenshot: git log --oneline output]

The colors here come from my .gitconfig settings and are helpful for distinguishing commit SHAs from commit log messages. But we can do better than this…

Try adding this git “hist” alias to your own .gitconfig file to produce an even more helpful git log output:

[alias]
 fa = fetch --all
 far = fetch --all --recurse-submodules 
 hist = log --pretty=format:'%Cred%h%Creset - %s %Cgreen(%cr) %C(bold blue)<%an>%Creset %C(yellow)%d%Creset' --abbrev-commit

Now, running “git hist” will produce this more easily parseable version of git log output, one that can be quite useful in finding exact commits by relative date:

[Screenshot: git hist output]

Much better, don’t you think?
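And because hist is just an alias for log with a custom format, all the usual log options still work with it. For example, to find commits from the last two weeks that touched a particular file (the path here is a placeholder):

[rcrelia@seamus ~]$ git hist --since="2 weeks ago" -- path/to/file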


cfn-flip – CloudFormation format flipper

In a previous post, I talked about how CloudFormation now supports YAML for templates. The fine folks at AWS Labs have since released a Python package, cfn-flip, that you can install and use from a shell to convert a CF template from one format to the other: feed it JSON and it converts to YAML, and vice versa. It also works when used as a Python library.

Installing and using cfn-flip is this easy:

[rcrelia@seamus ~]$ pip install cfn-flip
Collecting cfn-flip
 Downloading cfn_flip-0.2.1.tar.gz
Requirement already satisfied: PyYAML in /usr/local/lib/python2.7/site-packages (from cfn-flip)
Requirement already satisfied: six in /usr/local/lib/python2.7/site-packages (from cfn-flip)
Building wheels for collected packages: cfn-flip
 Running setup.py bdist_wheel for cfn-flip ... done
 Stored in directory: /Users/rcrelia/Library/Caches/pip/wheels/1b/dd/d0/184e11860f8712a4a574980e129bd7cce2e6720b1c4386d633
Successfully built cfn-flip
Installing collected packages: cfn-flip
Successfully installed cfn-flip-0.2.1

[rcrelia@seamus ~]$ cat /tmp/foo.json | cfn-flip > /tmp/foo.yaml
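And if you’d rather call it from Python, the package exposes conversion helpers you can import directly (a minimal sketch based on the cfn-flip README):

from cfn_flip import to_yaml

# JSON template in, YAML template out
with open('/tmp/foo.json') as f:
    print(to_yaml(f.read()))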