cfn-flip – CloudFormation format flipper

In a previous post, I talked about how CloudFormation now supports YAML for templates. The fine folks at AWS Labs have since released a Python package, cfn-flip, that you can install and use from a shell to convert a CF template from one format to the other: if you feed it JSON, it converts to YAML, and vice-versa. It can also be used as a Python library.

Installing and using cfn-flip is this easy:

[rcrelia@seamus ~]$ pip install cfn-flip
Collecting cfn-flip
 Downloading cfn_flip-0.2.1.tar.gz
Requirement already satisfied: PyYAML in /usr/local/lib/python2.7/site-packages (from cfn-flip)
Requirement already satisfied: six in /usr/local/lib/python2.7/site-packages (from cfn-flip)
Building wheels for collected packages: cfn-flip
 Running setup.py bdist_wheel for cfn-flip ... done
 Stored in directory: /Users/rcrelia/Library/Caches/pip/wheels/1b/dd/d0/184e11860f8712a4a574980e129bd7cce2e6720b1c4386d633
Successfully built cfn-flip
Installing collected packages: cfn-flip
Successfully installed cfn-flip-0.2.1

[rcrelia@seamus ~]$ cat /tmp/foo.json | cfn-flip > /tmp/foo.yaml
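
Because cfn-flip detects the input format on its own, the same pipeline works in the other direction as well. For example, to flip the YAML file from above back into JSON (the output path here is just an example):

[rcrelia@seamus ~]$ cat /tmp/foo.yaml | cfn-flip > /tmp/foo-back.json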


GitLab Repo Best Practices

I recently had to come up with some guidelines for others to follow when using shared GitLab repositories in a CI/CD configuration. Here is my take based on my experience so far; if you have more tips to share, please drop me a line or leave a comment here.

Note: GitLab uses the term Merge Request (MR) for what is commonly referred to on other platforms as a Pull Request… just a little FYI 🙂

GitLab Repo Usage – Best Practices and Tips

  • Create MRs when you are at a point where you want or need to see your changes in action (i.e., merged into master, tested, and deployed).
  • If you will be making more related changes later in the branch, do not opt to have the source branch removed from the repository when submitting your MR.
  • Merge at least once per day, especially if others are working on the same codebase at the same time. This makes it easier to resolve merge conflicts, which occur when two developers change the same repository content in their own respective branches and one merges ahead of the other.
  • Merge conflicts happen. Don’t worry if you experience one. Try to troubleshoot on your own, but if you cannot resolve it by yourself, pull in the other developer(s) whose changes are affecting your merge attempt and work together to resolve them.
  • When creating an MR, indicate in the title when it is time-sensitive by adding “ – ASAP” to the end of the title text. This helps reviewers prioritize their review queue with minimal disruption.
  • Do NOT approve your own MR if it involves a code change. The peer-review component of Merge Requests is an opportunity to communicate and share awareness of changes on the team. That said, here are some scenarios where it is OK to approve your own MRs:
    • you are pushing non-operational changes (e.g., comments, documentation)
    • you are the only developer available and the change is important, or waiting for MR review would block progress significantly (use good judgment)
  • When modifying code in a branch, keep each commit as specific as possible. Each commit should be understandable on its own, even if there are other commits in the branch (see the sample workflow after this list).
  • Not all MRs need to hit a pipeline. Depending on the repo’s pipeline configuration, branch-name filters may exist to ensure that certain types of branches get tested while others do not. This is especially true of non-code changes (e.g., updating a README).
  • When starting new development, as opposed to modifying existing code, it may make sense to create a personal repo or a fork of a shared repo so you can iterate quickly without going through a formal MR process in the shared repo. Once you’ve got some code ready for sharing, you can migrate it manually (copy) into the shared repo and work off MRs going forward. This is not required at all, but it can allow for more rapid iteration, especially on small teams.
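
To make the branch-and-commit flow behind these tips concrete, here is a rough sketch of a typical change; the branch name, file name, and commit message are placeholders, not anything GitLab requires:

git checkout -b feature/widget-timeout master
git add -p widget.py   # stage only the hunks that belong to this change
git commit -m "Increase widget API timeout"
git push -u origin feature/widget-timeout
# then open a Merge Request for feature/widget-timeout in the GitLab UI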

Git Smart with CodeCommit!

AWS recently announced that CodeCommit repositories can now be created via CloudFormation, which finally spurred me to create my own home lab Git repo. While I do have public GitHub repos, I have wanted a private repo for my experimental coding and other bits that aren’t ready or destined for public release. I could build my own VM at home to host a Git repo (I recently tinkered with GitLab Community Edition), but then I would have to worry about backups, accessibility from remote locations, etc. As it turns out, you can build and use a CodeCommit repo for free in your AWS account, which made it even more compelling. So, I decided to give CodeCommit a try.

CodeCommit is a fully managed, Git-based source control hosting service in AWS. Because it is fully managed, you can focus on using the repo rather than installing, maintaining, securing, and backing it up. And it’s accessible from anywhere, just like your other AWS services. The first 5 active users are free, which includes unlimited repo creation, 50 GB of storage, and 10,000 Git requests per month. Other benefits include integration paths with CodeDeploy and CodePipeline for a full CI/CD configuration. For a developer looking for a quick and easy way to manage non-public code, AWS makes building your Git repo in CodeCommit a very attractive proposition.

QuickStart: Deploying Your Own CodeCommit Repo

  1. Download my CodeCommit CloudFormation template (json|yaml) and use it to create your new repo.
  2. Add your SSH public key to your IAM user account and configure your SSH config to add a CodeCommit profile.
  3. Clone your new repo down to your workstation/laptop (be sure to use the correct AWS::Region and repository name):
    git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/yournewrepo

DeeperDive: Deploying Your Own CodeCommit Repo

Step 1: Building the CodeCommit Repository

I’ve created a CloudFormation template that deploys a CodeCommit repository as a stack. There are two versions, one in JSON and one in YAML, which is now supported for CF templating. Take your pick and deploy using either the console or the AWS CLI.

You need to specify four stack parameters:

  • Environment (not used, but could be used in Refs for tagging)
  • RepoName (100-character string limit)
  • RepoDescription (1000-character string limit)
  • Email (for SNS notifications on repo events)

Here are the AWS CLI commands required, with sample parameters:

# modify the template if needed for your account particulars then validate:
$ aws cloudformation validate-template --template-body file:///path/to/template/aws-deploy-codecommit-repo.yml

$ aws cloudformation create-stack --stack-name CodeCommitRepo --template-body file:///path/to/template/aws-deploy-codecommit-repo.yml  --parameters ParameterKey=Environment,ParameterValue=Dev ParameterKey=RepoName,ParameterValue=myrepo ParameterKey=RepoDescription,ParameterValue='My code' ParameterKey=Email,ParameterValue=youremail@someplace.com

In a few minutes, you should have a brand new CloudFormation stack along with your own CodeCommit repository. You will receive an SNS notification email if you use my stock template, so be sure to confirm your topic subscription to receive updates when the repository event trigger runs (e.g., after commits to the master branch).
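
If you prefer to watch from the CLI instead of the console, you can wait for stack creation to finish and then confirm the repository exists (the stack name matches the create-stack example above; describe-stacks should report CREATE_COMPLETE):

$ aws cloudformation wait stack-create-complete --stack-name CodeCommitRepo
$ aws cloudformation describe-stacks --stack-name CodeCommitRepo --query 'Stacks[0].StackStatus' --output text
$ aws codecommit list-repositories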

Step 2: Configure Your IAM Account With an SSH Key

Assuming that you, like me, prefer to use SSH for Git transactions, you will need to add your public SSH key to your IAM user in your AWS account. This is pretty straightforward, and the steps are spelled out in the CodeCommit documentation.
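
Once the public key is uploaded, IAM assigns it an SSH key ID, which becomes your SSH user for CodeCommit. The ~/.ssh/config entry looks something like the following; the key ID and identity file path are placeholders for your own values:

# User is the SSH key ID shown in the IAM console; IdentityFile is your private key
Host git-codecommit.*.amazonaws.com
  User Your-IAM-SSH-Key-ID
  IdentityFile ~/.ssh/id_rsa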

Step 3: Clone Your New Repo

Once you’ve configured your SSH key in your IAM account profile, you can verify CodeCommit access like so:

ssh git-codecommit.us-east-1.amazonaws.com

Once you are able to talk to CodeCommit via git over SSH, you should be able to clone down your new repo:

git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/yournewrepo

If you don’t want this repo to use your global Git settings, specify a repo-specific config:

git config user.name "Your Name"
git config user.email youremail@someplace.com

Now you are ready to add files to your new CodeCommit repository. Wasn’t that simple?
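
For example, seeding the empty repo with a README might look something like this (the file name and commit message are just examples):

cd yournewrepo
echo "# yournewrepo" > README.md
git add README.md
git commit -m "Add initial README"
git push -u origin master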

Agile Transformations and First Principles

When IT organizations start down the path of using new methodologies like DevOps, Continuous Delivery/Integration, and Agile workflows, there is a bias toward approaching things the way you always have, even while trying to be open-minded and searching for opportunities where the new can eclipse the problems of the old. It is rare for an organization, particularly one that has some time and history behind its practices, to fully embrace new approaches and go all-in from the start.

“It takes too much time away from our established process…”

“We can’t abandon existing infrastructure…”

“This money-making application over here, OldFancyThing, doesn’t fit a continuous delivery process, so we have to keep doing what we’ve always done and to do both is too hard…”

“But we’ve always done it THIS WAY…”

You have likely heard responses like these, especially recently. They are also examples of reasoning from analogy, and organizations miss a lot of opportunities as a result of this sort of thinking.

How do you escape such tethered thinking? By insisting on reasoning from first principles.

Reasoning from first principles involves looking at problems from the ground up each time, distilling the actionable targets down to their most basic components. It takes more effort than reasoning from analogy and most people don’t want to spend the time or energy deviating from a working path. Also, the time and energy may not yield a new target or end state that is of more value than existing approaches.

Innovation requires both iteration and bifurcation, with the latter being the territory of reasoning from first principles. We’re familiar with iterative change; in fact, Agile methods essentially demand an iterative workflow but also require breaking bifurcation down into quick cycles of work. Legacy or waterfall project workflows essentially demand bifurcation, but at the cost of reducing the impact of the change due to elongated development cycles, thereby reducing innovation. How many of us have been involved in a two-year waterfall project only to have it canceled around the 18-month mark because the once-valued innovation was no longer that innovative?

For this reason alone, adopting Agile methodologies for your software development and infrastructure engineering practices puts you into a better posture for increasing productivity and delivering innovative value. Continuous delivery & continuous integration workflows are the moving parts of the Agile engine. Sprint stories and sub-tasks are the fuel of the engine, and the engine’s output is iteration and bifurcation over short spans of time. In a sense, you are almost guaranteed to succeed and grow, with minimal risk-taking. What are you waiting for?