If you’re using AWS CloudFormation you might have come across clouds-aws. If not, you might want to have a look at it on GitHub.

It makes managing CloudFormation stacks from the shell fairly easy. It keeps a local representation of the stack as a template (JSON or YAML) and a YAML file holding the parameter values for the stack.

Until now an update to an existing stack had to be performed in the dark: there was no way to dry-run or preview a stack update before executing it, and no safety net other than the built-in rollback capability.

The new feature introduced in 0.3.8 brings change sets into play, allowing you to stage a stack update and inspect the changes in detail before actually executing them.

To get clouds simply run pip install clouds-aws. You can then dump an existing stack with clouds dump <stack name>, edit the stack locally, and create a new change set with clouds change create <stack name> <change name>, which will print an overview of the changes.

If you need more detail, you can inspect the full response the AWS API returns for the change by describing it in either YAML or JSON format: clouds change describe --yaml <stack name> <change name>.

When everything is to your liking you execute the change with clouds change execute --events <stack name> <change name>. The --events flag in this example will poll for stack events until the stack reaches a stable state.

Adding a subscription to an SNS topic in CloudFormation is easily done with a few lines:
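A subscription of a Lambda function to an SNS topic might look like this (resource names are placeholders):

```yaml
MySubscription:
  Type: AWS::SNS::Subscription
  Properties:
    TopicArn: !Ref MyTopic
    Protocol: lambda
    Endpoint: !GetAtt MyFunction.Arn
```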

But that’s actually not enough. While creating a subscription in the AWS Console is a single step, it implicitly creates a Lambda permission that allows SNS to invoke the function. In CloudFormation this permission needs to be added as a separate resource:
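Such a permission resource might look like this (resource names are placeholders):

```yaml
MyFunctionPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt MyFunction.Arn
    Principal: sns.amazonaws.com
    SourceArn: !Ref MyTopic
```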

This will allow SNS to successfully invoke the function.

If you send a JSON message to your SNS topic it will be encapsulated (as a JSON string) in the event sent to Lambda, and you will have to unpack it in your function, as in this example for the node.js runtime:

I’m currently two days into the Advanced Architecting on AWS class and am looking forward to taking the AWS Certified Solutions Architect – Professional Level exam later this month. Since I noticed there is quite some interest in this certification I want to use this blog post to discuss the sample exam questions you can download from AWS. If you haven’t worked through them yourself yet, you might want to try them first before reading on, as this post is a huge spoiler.


I created a small helper script that encapsulates API calls for simple snapshot management. This can be used in cron jobs to trigger snapshot creation and cleanup.

Keeping manually created snapshots is the minimum backup safety one should have on top of automatically created snapshots, as the latter are deleted together with the instance. In case of unintentional deletion of an RDS instance, automated snapshots are of no help in restoring the data.

RDS-Snapshot on GitHub

Wanna know whether your ELB has at least one instance in service or not?

Just add a CloudWatch Alarm to your ELB and have it send status changes to SNS (from where you can route them to whatever notification system you’re using).
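A sketch of such an alarm, assuming the load balancer and SNS topic live in the same template (resource names are placeholders):

```yaml
UnhealthyHostsAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Alert when no instance is in service
    Namespace: AWS/ELB
    MetricName: HealthyHostCount
    Dimensions:
      - Name: LoadBalancerName
        Value: !Ref MyLoadBalancer
    Statistic: Minimum
    Period: 60
    EvaluationPeriods: 1
    Threshold: 1
    ComparisonOperator: LessThanThreshold
    AlarmActions:
      - !Ref NotificationTopic
    OKActions:
      - !Ref NotificationTopic
```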

The following template will create a CloudFront distribution with an AliasRecord pointing at it. It assumes your origin is already up and running. You don’t want to have the origin and CloudFront in the same stack anyway.
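A condensed sketch of such a template (domain names and resource names are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront alias targets):

```yaml
Distribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      Aliases:
        - www.example.com
      Origins:
        - Id: origin
          DomainName: origin.example.com
          CustomOriginConfig:
            OriginProtocolPolicy: match-viewer
      DefaultCacheBehavior:
        TargetOriginId: origin
        ViewerProtocolPolicy: allow-all
        ForwardedValues:
          QueryString: false

AliasRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.
    Name: www.example.com.
    Type: A
    AliasTarget:
      DNSName: !GetAtt Distribution.DomainName
      HostedZoneId: Z2FDTNDATAQYW2
```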

The best way to ensure a script or command runs on shutdown is to use upstart, which is the default init system in CentOS 6:
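A minimal upstart job for this might look like the following (job name and script path are placeholders):

```
# /etc/init/on-shutdown.conf
description "run a command when the system goes down"

# runlevel 0 is halt, runlevel 6 is reboot
start on runlevel [06]

task
exec /usr/local/bin/my-shutdown-script.sh
```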

In order to be able to properly diff and read your CloudFormation templates you want them to be in a harmonised shape:

  1. Validity check and indentation
  2. Apply some regex search/replace transformations to improve human readability

This can be done on the shell: nice-cf.py < template.json > beautified.json

Or in vim using a keybinding:
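A hypothetical mapping for ~/.vimrc that filters the whole buffer through the script:

```vim
" beautify the current CloudFormation template with <leader>cf
nnoremap <leader>cf :%!nice-cf.py<CR>
```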

This is the nice-cf.py Python script used to apply the transformations to the template:
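A minimal sketch of what such a beautifier can look like (not the actual nice-cf.py; the real regex transformations differ):

```python
import json
import re


def beautify(raw):
    """Validate, re-indent and post-process a CloudFormation template."""
    template = json.loads(raw)  # fails loudly on invalid JSON
    pretty = json.dumps(template, indent=2, sort_keys=True)
    # Example transformation: collapse {"Ref": "..."} objects onto one line.
    pretty = re.sub(r'\{\s*"Ref":\s*("[^"]+")\s*\}', r'{ "Ref": \1 }', pretty)
    return pretty + "\n"
```

Used as a filter (see the shell invocation above) by wrapping it in `sys.stdout.write(beautify(sys.stdin.read()))`.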

You can SSH to a host inside a private subnet directly by combining the IPs/hostnames with a plus: ssh

The first IP gets a .aws suffix because I want AWS IPs to trigger an AWS-specific config section in ~/.ssh/config that uses a proxy and doesn’t clog up my known_hosts file with the SSH host keys of disposable machines.

These are the entries in my ~/.ssh/config that allow the above command to get you straight to your machine:
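A sketch of such entries (the host patterns and the jump host are assumptions; the exact originals weren’t preserved):

```
# Chain through the host before the "+" to reach the host after it
Host *+*
    ProxyCommand ssh $(echo %h | sed 's/+[^+]*$//') -W $(echo %h | sed 's/^.*+//'):%p

# AWS-specific settings, triggered by the .aws suffix
Host *.aws
    ProxyCommand ssh proxy.example.com -W $(basename %h .aws):%p
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
```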

When you create a new MFA device you can put the secret key into an encrypted container with the following command. Paste in the secret key that you can copy when creating a new virtual MFA device in the AWS web console.

gpg --armor -e > ~/.aws/<mfa-name>.mfa.asc

The following snippet will allow you to easily access these MFA devices on the shell (make sure you have oathtool installed). To get a code just run mfa <profile> and enter your passphrase.
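A sketch of such a function (the file naming follows the gpg command above; pbcopy assumes a Mac):

```shell
mfa() {
    local secret code
    # decrypt the stored secret key; gpg prompts for your passphrase
    secret=$(gpg -d ~/.aws/"$1".mfa.asc) || return 1
    # derive the current TOTP code from the secret
    code=$(oathtool --totp --base32 "$secret") || return 1
    echo "$code"
    echo -n "$code" | pbcopy
}
```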

The pbcopy will directly copy the six-digit code to the clipboard on Macs. On Linux you might have to tweak that a little bit to suit your needs.