Managing pre-commit hooks in Git

Git comes with support for triggering custom actions in response to repository activity: pushes, pulls, merges, and commits can all be configured to kick off specific actions. Often, these custom actions are geared toward promoting intra-team communication and process gating, like enforcing commit log standards and best practices for code content and syntax. Of course, CI workflows are already a popular front line of defense against bad code pushes, running linters and syntax checks as an initial test stage in a pipeline. However, those linting processes have a cost in terms of the compute resources they use, and in a large development environment that can translate into real money pretty quickly. Why spend compute cycles on pipeline jobs that can just as easily run on a developer’s workstation or laptop? So, let’s take a closer look at Git hooks…

Git hooks are configured in a given repo via files located in .git/hooks. A new repository will be automagically populated with these handy tools, which are inactive by default thanks to the “.sample” file suffix:

[rcrelia@fuji hooks (GIT_DIR!)]$ ls -latr
total 40
-rwxr-xr-x 1 rcrelia staff 3611 Dec 22 2014 update.sample
-rwxr-xr-x 1 rcrelia staff 1239 Dec 22 2014 prepare-commit-msg.sample
-rwxr-xr-x 1 rcrelia staff 4951 Dec 22 2014 pre-rebase.sample
-rwxr-xr-x 1 rcrelia staff 1356 Dec 22 2014 pre-push.sample
-rwxr-xr-x 1 rcrelia staff 1642 Dec 22 2014 pre-commit.sample
-rwxr-xr-x 1 rcrelia staff  398 Dec 22 2014 pre-applypatch.sample
-rwxr-xr-x 1 rcrelia staff  189 Dec 22 2014 post-update.sample
-rwxr-xr-x 1 rcrelia staff  896 Dec 22 2014 commit-msg.sample
-rwxr-xr-x 1 rcrelia staff  452 Dec 22 2014 applypatch-msg.sample
drwxr-xr-x 11 rcrelia staff 374 Dec 22 2014 .
drwxr-xr-x 15 rcrelia staff 510 Jan 4 2015 ..

Having individual hooks like these provides a powerful framework for customizing your repository usage to your specific needs and workflows. Each one is triggered at the stage of the Git workflow described by its filename (pre-commit, post-update, etc.).

Tools like linters fit nicely into the pre-commit action sequence. By configuring the pre-commit hook with a linter, you deliver higher quality code to your pipelines, which makes for more efficient use of your compute resource budget. A hand-rolled hook is nothing more than an executable script named pre-commit in .git/hooks, as sketched below.
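
Here is a minimal sketch of such a hook, assuming shellcheck is installed on the workstation; it lints any staged shell scripts and aborts the commit if the linter complains:

#!/usr/bin/env bash
# .git/hooks/pre-commit -- example hook; make it executable with chmod +x
# Lint any staged shell scripts with shellcheck before allowing the commit.
set -euo pipefail

# Collect staged files (added, copied, or modified) and keep only *.sh
staged=$(git diff --cached --name-only --diff-filter=ACM | grep '\.sh$' || true)

if [ -n "$staged" ]; then
  # shellcheck exits nonzero on findings, which aborts the commit under set -e
  echo "$staged" | xargs shellcheck
fi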

Hook Management: Yelp’s pre-commit

I recently started using an open source utility released by Yelp’s engineers called, simply, “pre-commit”. Essentially, it is a framework for managing the pre-commit hook in a Git repository using a single configuration file with multiple action sequences, which allows one pre-commit hook to perform many different kinds of actions. It includes some basic linter capabilities as well as other code quality controls, and it also integrates with other projects (e.g., ansible-lint has a pre-commit hook available).

Setup is straightforward, as is the usage. Here’s how I did it:

pip install pre-commit
cd repodir
pre-commit install
# create a file called .pre-commit-config.yaml in the root of your repo
git add .pre-commit-config.yaml
git commit -m "turn on pre-commit hook"

Immediately you should see the pre-commit utility do its thing when you commit this or any other change to your repository.
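
By default the hook only checks the files in a given commit. To sweep the entire repository (handy right after enabling the hook), you can run it manually:

# run every configured hook against every file in the repo
pre-commit run --all-files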

Here is the current working version of my pre-commit config file (one per repo):

- repo: git://github.com/pre-commit/pre-commit-hooks
  sha: v0.7.1
  hooks:
   - id: trailing-whitespace
     files: \.(js|rb|md|py|sh|txt|yaml|yml)$
   - id: check-json
     files: \.(json|template)$
   - id: check-yaml
     files: \.(yml|yaml)$
   - id: detect-private-key
   - id: detect-aws-credentials
- repo: git://github.com/detailyang/pre-commit-shell
  sha: 6f03a87e054d25f8a229cef9005f39dd053a9fcb
  hooks:
   - id: shell-lint

So, I’m using some of pre-commit’s built-in handlers for whitespace cleanup, JSON linting, YAML linting, and checking to make sure I don’t include any private keys or AWS credentials in my commits. I’ve also integrated a third-party hook, pre-commit-shell, which wraps shellcheck to check syntax and enforce best practices in any shell scripts I add to the repo.

And here is an example of a code commit that triggers pre-commit’s operation:

[rcrelia@fuji aws-mojo (master +=)]$ git commit -m "pre-commit"
[INFO] Installing environment for git://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
Trim Trailing Whitespace.................................................Passed
Check JSON...........................................(no files to check)Skipped
Check Yaml...............................................................Passed
Detect Private Key.......................................................Passed
Detect AWS Credentials...................................................Passed
Shell Syntax Check...................................(no files to check)Skipped
[master 7d837e7] pre-commit
 1 file changed, 15 insertions(+)
 create mode 100644 .pre-commit-config.yaml

While pre-commit doesn’t handle management of the other available Git hooks, it does a very good job with what it does control, with a robust plugin interface and the ability to write custom hooks.

If you find yourself in need of some automated linting of your code before you push to your remote repositories, I highly recommend the use of pre-commit for its ease of use and operational flexibility.

Happy coding!

Gitlab Repo Best Practices

I recently had to come up with some guidelines for others to use when it comes to shared Gitlab repositories in a CI/CD configuration. Here is my take based on my experience so far; if you have more to share, please drop me a line or comment here.

Note: Gitlab uses the term Merge Request (MR) for what is commonly referred to on other Git hosting platforms as a Pull Request… just a little FYI 🙂

Gitlab Repo Usage – Best Practices and Tips

  • Create MRs when you are at a point where you want or need to see your changes in action (i.e., merged into master, tested, and deployed).
  • If you will be making more related changes later in the branch, do not opt to have the source branch removed from the repository when submitting your MR.
  • Merge at least once per day, especially if others are working on the same codebase at the same time. This makes it easier to resolve merge conflicts, which occur when two developers change the same repository content/object in their own respective branches and one merges ahead of the other.
  • Merge conflicts happen. Don’t worry if you experience one. Try to troubleshoot on your own, but if you cannot resolve it by yourself, pull in the other developer(s) whose changes are affecting your merge attempt and work together to resolve them.
  • When creating an MR, indicate in the Title whether it is time-sensitive by adding “ – ASAP” to the end of the Title text. This helps reviewers prioritize their review requests with minimal disruption.
  • Do NOT approve your own MR if it involves a code change. The peer-review component of Merge Requests is an opportunity to communicate and share awareness of changes on the team. That said, here are some scenarios where it is OK to approve your own MRs:
    • you are pushing non-operational changes (e.g., comments, documentation)
    • you are the only developer available and it’s an important change, or waiting for MR review would block progress significantly (use good judgment)
  • When adding to a branch, keep your commits as specific as possible when modifying code. Each commit should be understandable on its own, even if there are other commits in the branch.
  • Not all MRs need to hit a pipeline. Depending on the repo pipeline configuration, branch name filters may exist to ensure certain types of branches get tested while other types do not. This is especially true of non-code changes (e.g., updating a README).
  • When starting new development, as opposed to modifying existing code, it may make sense to create a personal repo or a fork of a shared repo so you can iterate quickly without the formal MR process of a shared repo. Once you’ve got some code ready for sharing, you can migrate it manually (copy) into the shared repo and work off MRs going forward. This isn’t required at all, but it can allow for more rapid iteration, especially on small teams.

S3crets

I recently read through Chris Craig’s AWS Security Blog post about limiting S3 bucket access based on specific IAM credentials/roles. Two parts in particular are worth mentioning, because together they make an effective solution for many needs (e.g., distributing secret key values programmatically).

Explicit Deny in an S3 Bucket Policy

First, you construct a specific S3 bucket policy that controls access via IAM user IDs (IAM user, IAM instance role, and instance profile) as well as the AWS root account. In the policy below, note the explicit Deny statement at the end, which is how you lock down access to everything except those IAM entities. Make sure you include yourself or root (and that you have root access); otherwise, you will lock yourself out of the bucket you just created. It’s best to work with a temporary IAM user for testing, FYI.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::012345678901:role/my-role"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::012345678901:role/my-role"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userId": [
            "AIDAIDEADBEEF01234567",
            "AROAJABCD1234EF560123:*",
            "AIPAIBEA2510CD3498765:*",
            "012345678901"
          ]
        }
      }
    }
  ]
}

Note: IAM objects in the Deny statement condition have tell-tale userId patterns as follows:

  • “AIDAIDEADBEEF01234567” – IAM user
  • “AROAJABCD1234EF560123” – IAM role (instance role in this case)
  • “AIPAIBEA2510CD3498765” – IAM instance profile
  • “012345678901” – AWS account number, or root

This policy essentially prohibits all access to “my-bucket” and its keys, except for the IAM entities listed in the Condition of the Deny statement.
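
To put this in place from the command line, you can apply the policy with the AWS CLI. A quick sketch, assuming the policy above is saved as bucket-policy.json and you are running as one of the allowed identities:

# Apply the bucket policy (run this as an identity the policy still allows)
aws s3api put-bucket-policy --bucket my-bucket --policy file://bucket-policy.json

# Sanity check: listing should succeed for an allowed identity and be denied for anyone else
aws s3 ls s3://my-bucket/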

Explicit Allow in an IAM Policy

To make sure the IAM entities you are not denying can actually reach the S3 bucket in question, you must craft a specific IAM policy and attach it to the IAM object(s) that require access. This policy is straightforward and is the second piece of the solution:

{
 "Version": "2012-10-17",
 "Statement": [
    {
      "Effect": "Allow",
      "Action": [
         "s3:ListAllMyBuckets",
         "s3:GetBucketLocation"
       ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
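
To wire it up, attach the policy to the role referenced in the bucket policy. Here’s a sketch using the AWS CLI; the policy name and file name are just placeholders:

# Attach the Allow policy inline to the role from the bucket policy
aws iam put-role-policy \
  --role-name my-role \
  --policy-name s3crets-bucket-access \
  --policy-document file://iam-allow-policy.json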

These two policies together form a powerful mechanism for creating a simple distribution point for secrets that you wish to use but not store locally in code or on an instance. A potential variation might include integration with KMS to provide at-rest encryption as well as programmatic decryption/encryption of your secrets as you move them in and out of S3, as sketched below.
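
For example, one low-effort way to layer in KMS is server-side encryption with a customer-managed key when you copy secrets into the bucket. The key alias and file names below are placeholders, and the calling identity also needs the relevant kms: permissions on the key:

# Store a secret encrypted at rest with a KMS key you control
aws s3 cp ./db-password.txt s3://my-bucket/secrets/db-password.txt \
  --sse aws:kms --sse-kms-key-id alias/my-secrets-key

# Retrieval transparently decrypts, provided IAM and KMS permissions allow it
aws s3 cp s3://my-bucket/secrets/db-password.txt -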

Aside

Good reads: the Gruntwork blog

I’ve been enjoying Gruntwork’s blog, especially the posts by Yevgeniy Brikman. Gruntwork is a Terraform shop, but Yevgeniy’s posts are chock full of good ideas and practices around devops in general. Check it out!

Security as op-ex savings

The journey to the cloud is compelling enough as it is, with its foundation of IaaS components and automation capabilities across all layers of your computing environment. It promotes the use of CI/CD methodologies and best practices for configuration management. These characteristics yield considerable savings in operational costs (op-ex), which can run amok in on-premises deployments of infrastructure.

Still, in my experience, one of the most powerful arguments for using cloud services, and AWS in particular, is the value added by a hosting architecture that is secure by design. How many times have you seen a well-architected application or infrastructure suffer functional or performance problems due to poor security design? In addition, the capital outlay (cap-ex) required for on-premises infrastructure is non-trivial if you want the same breadth and depth of security controls and auditability for compliance that AWS provides customers, again by design. This facet of AWS alone can substantially mitigate the op-ex costs of running services in the cloud, which can vary dramatically depending on how you solve problems with your infrastructure.

AWS provides substantial documentation on cloud security, and one of the best places to start (or revisit) is the periodic publication “AWS Security Best Practices”, the current version of which can be found in the Developer Documents section of the main AWS cloud security resource collection. If you haven’t read this document yet or lately, below I have compiled some excerpts that touch on common issues and concerns when deploying infrastructure in AWS. I highly recommend reading the Best Practices document at least a few times a year, as the pace of innovation in AWS continues to accelerate each quarter.

In no particular order, here are some noteworthy best practice highlights from the August 2016 publication of the Best Practices guide that help illustrate the value and savings of using AWS versus the costs of traditional datacenter computing environments:

IP Spoofing – Amazon EC2 instances cannot send spoofed network traffic. The AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.

Distributed Denial Of Service (DDoS) Attacks – AWS API endpoints are hosted on large, Internet-scale, world-class infrastructure that benefits from the same engineering expertise that has built Amazon into the world’s largest online retailer. Proprietary DDoS mitigation techniques are used. Additionally, AWS’s networks are multihomed across a number of providers to achieve Internet access diversity.

Packet sniffing by other tenants – It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC.

Secure Access Points – AWS has strategically placed a limited number of access points to the cloud to allow for a more comprehensive monitoring of inbound and outbound communications and network traffic. These customer access points are called API endpoints, and they allow secure HTTP access (HTTPS), which allows you to establish a secure communication session with your storage or compute instances within AWS. To support customers with FIPS cryptographic requirements, the SSL-terminating load balancers in AWS GovCloud (US) are FIPS 140-2-compliant. 

Several services also now offer more advanced cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This helps prevent the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised.

Instance Isolation – Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. Amazon is active in the Xen community, which provides awareness of the latest developments. In addition, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance’s virtual interface. All packets must pass through this layer, thus an instance’s neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms. Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customer, so that one customer’s data is never unintentionally exposed to another. In addition, memory allocated to guests is scrubbed (set to zero) by the hypervisor when it is unallocated to a guest. The memory is not returned to the pool of free memory available for new allocations until the memory scrubbing is complete.

Firewall (Security Groups) – Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default deny-all mode and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by protocol, by service port, as well as by source IP address (individual IP or Classless Inter-Domain Routing (CIDR) block).

The firewall isn’t controlled through the guest OS; rather it requires your X.509 certificate and key to authorize changes, thus adding an extra layer of security. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, therefore enabling you to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports you open, and for what duration and purpose. The default state is to deny all incoming traffic, and you should plan carefully what you will open when building and securing your applications. Well-informed traffic management and security design are still required on a per instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls such as IPtables or the Windows Firewall and VPNs. This can restrict both inbound and outbound traffic.
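
As a concrete illustration, here is how a narrowly scoped ingress rule might be added from the AWS CLI; the group ID and CIDR below are placeholders:

# Allow HTTPS inbound only from a specific corporate CIDR block
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 \
  --cidr 203.0.113.0/24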

Storage Device Decommissioning – When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M (“National Industrial Security Program Operating Manual “) or NIST 800-88 (“Guidelines for Media Sanitization”) to destroy data as part of the decommissioning process. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

Multi-factor Authentication – You can enable MFA devices for your AWS Account as well as for the users you have created under your AWS Account with AWS IAM. In addition, you add MFA protection for access across AWS Accounts, for when you want to allow a user you’ve created under one AWS Account to use an IAM role to access resources under another AWS Account. You can require the user to use MFA before assuming the role as an additional layer of security. 

You can also enforce MFA authentication for AWS service APIs in order to provide an extra layer of protection over powerful or privileged actions such as terminating Amazon EC2 instances or reading sensitive data stored in Amazon S3. You do this by adding an MFA-authentication requirement to an IAM access policy. You can attach these access policies to IAM users, IAM groups, or resources that support Access Control Lists (ACLs) like Amazon S3 buckets, SQS queues, and SNS topics.
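
One common way to do this is an IAM policy that keys off the aws:MultiFactorAuthPresent condition. Here is a rough sketch; the group name, policy name, and file name are placeholders:

# require-mfa.json: deny instance termination unless the caller authenticated with MFA
cat > require-mfa.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": { "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" } }
    }
  ]
}
EOF

# Attach it inline to an IAM group (placeholder name)
aws iam put-group-policy --group-name ops --policy-name RequireMFAForTerminate \
  --policy-document file://require-mfa.json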

AWS Trusted Advisor Security Checks – The AWS Trusted Advisor customer support service not only monitors for cloud performance and resiliency, but also cloud security. Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance, or close security gaps. It provides alerts on several of the most common security misconfigurations that can occur, including leaving certain ports open that make you vulnerable to hacking and unauthorized access, neglecting to create IAM accounts for your internal users, allowing public access to Amazon S3 buckets, not turning on user activity logging (AWS CloudTrail), or not using MFA on your root AWS Account.
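
Trusted Advisor checks can also be pulled programmatically (this requires a Business or Enterprise support plan), which makes it easy to fold the security checks into your own reporting:

# List all available Trusted Advisor checks, including the security category
aws support describe-trusted-advisor-checks --language en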

Amazon Virtual Private Cloud (Amazon VPC) Security – Normally, each Amazon EC2 instance you launch is randomly assigned a public IP address in the Amazon EC2 address space. Amazon VPC enables you to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses in the range of your choice (e.g., 10.0.0.0/16). You can define subnets within your VPC, grouping similar kinds of instances based on IP address range, and then set up routing and security to control the flow of traffic in and out of the instances and subnets. AWS offers a variety of VPC architecture templates with configurations that provide varying levels of public access:

  • VPC with a single public subnet only. Your instances run in a private, isolated section of the AWS cloud with direct access to the Internet. Network ACLs and security groups can be used to provide strict control over inbound and outbound network traffic to your instances.
  • VPC with public and private subnets. In addition to containing a public subnet, this configuration adds a private subnet whose instances are not addressable from the Internet. Instances in the private subnet can establish outbound connections to the Internet via the public subnet using Network Address Translation (NAT).
  • VPC with public and private subnets and hardware VPN access. This configuration adds an IPsec VPN connection between your Amazon VPC and your data center, effectively extending your data center to the cloud while also providing direct access to the Internet for public subnet instances in your Amazon VPC. In this configuration, customers add a VPN appliance on their corporate datacenter side.
  • VPC with private subnet only and hardware VPN access. Your instances run in a private, isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet. You can connect this private subnet to your corporate data center via an IPsec VPN tunnel.

Security features within Amazon VPC include security groups, network ACLs, routing tables, and external gateways. Each of these items is complementary to providing a secure, isolated network that can be extended through selective enabling of direct Internet access or private connectivity to another network.

AWS Identity and Access Management (AWS IAM) – AWS IAM allows you to create multiple users and manage the permissions for each of these users within your AWS Account. A user is an identity (within an AWS Account) with unique security credentials that can be used to access AWS Services. AWS IAM eliminates the need to share passwords or keys, and makes it easy to enable or disable a user’s access as appropriate. AWS IAM enables you to implement security best practices, such as least privilege, by granting unique credentials to every user within your AWS Account and only granting permission to access the AWS services and resources required for the users to perform their jobs. AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted.

AWS CloudTrail Security – AWS CloudTrail provides a log of all requests for AWS resources within your account. For each event recorded, you can see what service was accessed, what action was performed, any parameters for the action, and who made the request. Not only can you see which one of your users or services performed an action on an AWS service, but you can see whether it was as the AWS root account user or an IAM user, or whether it was with temporary security credentials for a role or federated user. CloudTrail basically captures information about every API call to an AWS resource, whether that call was made from the AWS Management Console, CLI, or an SDK. If the API request returned an error, CloudTrail provides the description of the error, including messages for authorization failures. It even captures AWS Management Console sign-in events, creating a log record every time an AWS account owner, a federated user, or an IAM user simply signs into the console.
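
Getting a trail in place is a one-time setup. A minimal sketch with the AWS CLI; the trail and bucket names are placeholders, and the bucket needs a policy that allows CloudTrail to write to it:

# Create a trail that delivers logs to S3, then start logging
aws cloudtrail create-trail --name my-account-trail --s3-bucket-name my-cloudtrail-logs
aws cloudtrail start-logging --name my-account-trail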

The Security Best Practices document contains many more descriptions and illustrations of AWS’s secure-by-design environment and services. Take some time over the holidays and review the document, with an eye towards op-ex savings in the coming new year. Security is not a luxury, and it definitely shouldn’t cost like one.

Safe travels on your journey to/in the cloud!

cfn_nag – a security linter for CloudFormation

It’s a little too easy to create insecure resource configurations in CloudFormation when you are focused on getting the entire stack to render correctly. By the time you are done building and testing a template, you still have to take extra time to revisit all your resources and make sure you are following good security and IaaS practices.

Enter cfn_nag, a handy little Ruby gem created by Stelligent that can help identify problems in your CloudFormation templates before you publish them. In the README for the repo on GitHub, Stelligent says this about cfn_nag:

The cfn-nag tool looks for patterns in CloudFormation templates that may indicate insecure infrastructure. Roughly speaking it will look for:

  • IAM rules that are too permissive (wildcards)
  • Security group rules that are too permissive (wildcards)
  • Access logs that aren’t enabled
  • Encryption that isn’t enabled

Under the covers, cfn_nag uses jq to parse the JSON input files you provide for inspection. In my case, I simply installed jq first using Homebrew:

[rcrelia@fuji vpc-scenario-2-reference (master=)]$ brew install jq
==> Installing dependencies for jq: oniguruma
==> Installing jq dependency: oniguruma
==> Downloading https://homebrew.bintray.com/bottles/oniguruma-6.0.0.yosemite.bottle.tar.gz
######################################################################## 100.0%
==> Pouring oniguruma-6.0.0.yosemite.bottle.tar.gz
🍺 /usr/local/Cellar/oniguruma/6.0.0: 16 files, 1.3M
==> Installing jq
==> Downloading https://homebrew.bintray.com/bottles/jq-1.5_1.yosemite.bottle.tar.gz
######################################################################## 100.0%
==> Pouring jq-1.5_1.yosemite.bottle.tar.gz
🍺 /usr/local/Cellar/jq/1.5_1: 18 files, 958.5K

Once I had jq, I installed cfn_nag:

[rcrelia@fuji vpc-scenario-2-reference (master=)]$ gem install cfn-nag
Fetching: trollop-2.1.2.gem (100%)
Successfully installed trollop-2.1.2
Fetching: multi_json-1.12.1.gem (100%)
Successfully installed multi_json-1.12.1
Fetching: little-plugger-1.1.4.gem (100%)
Successfully installed little-plugger-1.1.4
Fetching: logging-2.0.0.gem (100%)
Successfully installed logging-2.0.0
Fetching: cfn-nag-0.0.19.gem (100%)
Successfully installed cfn-nag-0.0.19
Parsing documentation for trollop-2.1.2
Installing ri documentation for trollop-2.1.2
Parsing documentation for multi_json-1.12.1
Installing ri documentation for multi_json-1.12.1
Parsing documentation for little-plugger-1.1.4
Installing ri documentation for little-plugger-1.1.4
Parsing documentation for logging-2.0.0
Installing ri documentation for logging-2.0.0
Parsing documentation for cfn-nag-0.0.19
Installing ri documentation for cfn-nag-0.0.19
Done installing documentation for trollop, multi_json, little-plugger, logging, cfn-nag after 1 seconds
5 gems installed

At this point, I had a working version of cfn_nag and immediately checked some recent templates. Here is output from running against one of my aws-mojo “Scenario 2” templates I recently posted about:

[rcrelia@fuji vpc-scenario-2-reference (master=)]$ cfn_nag --input-json-path ./aws-vpc-instance-securitygroups.json
------------------------------------------------------------
./aws-vpc-instance-securitygroups.json
------------------------------------------------------------------------------------------------------------------------
| WARN
|
| Resources: ["PubInstSGIngressHttp", "PubInstSGIngressHttps"]
|
| Security Group Standalone Ingress found with cidr open to world. This should never be true on instance. Permissible on ELB
------------------------------------------------------------
| WARN
|
| Resources: ["PrivInstSGEgressGlobalHttp", "PrivInstSGEgressGlobalHttps", "PubInstSGEgressGlobalHttp", "PubInstSGEgressGlobalHttps"]
|
| Security Group Standalone Egress found with cidr open to world.

Failures count: 0
Warnings count: 6

Pretty neat! In this case, the warnings are expected: I designed the VPC security groups to route traffic through NAT instances, and the public NAT instances themselves need to receive traffic globally in the public zones.
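
Since cfn_nag is just a command-line tool, it’s also easy to fold into the pre-commit workflow from earlier in this post or into a CI job. A rough sketch, assuming cfn_nag exits nonzero when it finds failing violations:

# Lint every CloudFormation JSON template tracked in the repo
for template in $(git ls-files '*.json' '*.template'); do
  echo "Checking $template"
  cfn_nag --input-json-path "$template" || exit 1
done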

Obviously, you may want to consider adding your own cfn_nag rules to the stock set it ships with, to reflect your own specific security and configuration concerns.

To see a list of all the rules that come pre-configured in cfn_nag, simply run cfn_nag_rules:

[rcrelia@fuji vpc-scenario-2-reference (master=)]$ cfn_nag_rules

WARNING VIOLATIONS:
CloudFront Distribution should enable access logging
Elastic Load Balancer should have access logging configured
Elastic Load Balancer should have access logging enabled
IAM managed policy should not allow * resource
IAM managed policy should not allow Allow+NotAction
IAM managed policy should not allow Allow+NotResource
IAM policy should not allow * resource
IAM policy should not allow Allow+NotAction
IAM policy should not allow Allow+NotResource
IAM role should not allow * resource on its permissions policy
IAM role should not allow Allow+NotAction
IAM role should not allow Allow+NotAction on trust permissinos
IAM role should not allow Allow+NotResource
Lambda permission beside InvokeFunction might not be what you want? Not sure!?
S3 Bucket likely should not have a public read acl
S3 Bucket policy should not allow Allow+NotAction
SNS Topic policy should not allow Allow+NotAction
SQS Queue policy should not allow Allow+NotAction
Security Group Standalone Egress found with cidr open to world.
Security Group Standalone Ingress cidr found that is not /32
Security Group Standalone Ingress found with cidr open to world. This should never be true on instance. Permissible on ELB
Security Group egress with port range instead of just a single port
Security Group ingress with port range instead of just a single port
Security Groups found egress with port range instead of just a single port
Security Groups found ingress with port range instead of just a single port
Security Groups found with cidr open to world on egress
Security Groups found with cidr open to world on egress array
Security Groups found with cidr open to world on ingress array. This should never be true on instance. Permissible on ELB
Security Groups found with cidr open to world on ingress. This should never be true on instance. Permissible on ELB
Security Groups found with cidr that is not /32
Specifying credentials in the template itself is probably not the safest thing

FAILING VIOLATIONS:
A Cloudformation template must have at least 1 resource
AWS::EC2::SecurityGroup must have Properties
AWS::EC2::SecurityGroupEgress must have Properties
AWS::EC2::SecurityGroupEgress must not have GroupName - EC2 classic is a no-go!
AWS::EC2::SecurityGroupIngress must have Properties
AWS::EC2::SecurityGroupIngress must not have GroupName - EC2 classic is a no-go!
AWS::IAM::ManagedPolicy must have Properties
...snip...

There are two classes of notifications: warning violations and failing violations. There is good guidance in each set, but again, you may find that you want to edit or add your own rules to increase the value of cfn_nag for your infrastructure.

AWS Diagrams with draw.io

Recently, I have been using the online diagramming tool draw.io for the AWS architecture diagrams I generate. It has an intuitive interface, allows for local saving of images (PDF and PNG formats), and is free to use. Most AWS services are represented in its diagram palette, and draw.io supports diagram storage on Dropbox and Google Drive as well. You can create non-AWS diagrams with draw.io, too. For more details, check out their online manual. Here’s a sample diagram I made using draw.io that is part of a recent post:

[Image: vpc-reference-nat-instances]