If you’re using AWS CloudFormation you might have come across clouds-aws. If not, you might want to have a look at it on GitHub.

It makes managing CloudFormation stacks from the shell fairly easy. It keeps a local representation of the stack as a template (JSON or YAML) and a YAML file holding the parameter values for the stack.

Until now, an update to an existing stack had to be performed in the dark: there was no way to dry-run or preview a stack update before going for it, and no safety net other than the built-in rollback capability.

The new feature introduced in 0.3.8 now brings change sets to the game, allowing you to stage a stack update and inspect the changes in detail before actually executing them.

To get clouds, simply run pip install clouds-aws. You can then dump an existing stack with clouds dump <stack name>, edit the stack locally, and create a new change set with clouds change create <stack name> <change name>, which will print an overview of the changes.

If you need more detail, you can inspect the full response the AWS API returns for the change by describing it in either YAML or JSON format: clouds change describe --yaml <stack name> <change name>.

When everything is to your liking, you execute the change with clouds change execute --events <stack name> <change name>. The --events flag in this example will poll for stack events until the stack reaches a stable state.
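Putting the whole workflow together, a session might look like this (a sketch: the stack and change-set names are placeholders, and the local file layout produced by clouds dump is an assumption):

$ pip install clouds-aws
$ clouds dump mystack                        # writes template and parameters locally
$ vi mystack/template.yaml                   # edit the local template (path may vary)
$ clouds change create mystack mychange      # stage the update, prints an overview
$ clouds change describe --yaml mystack mychange
$ clouds change execute --events mystack mychange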

I’m forced to use GoCD for deployments, and more often than not I stumble across its limitations.

The last thing that gave me quite a headache was the automatic locking of pipelines. In general this is a good idea: when a pipeline is triggered it is locked to prevent parallel execution, which is a good thing when the pipeline deploys a stack of stuff in some cloud environment and you don’t want races between competing pipeline runs.

However, if the pipeline errors out prematurely, it stays locked until somebody bothers to unlock it in the UI. This is quite likely in most cases, as the pipelines have several steps that might break. I had already included cleanup of debris but hadn’t been able to unlock the pipeline automatically. That is, until I had an idea for which I will most probably go to system engineering hell. But since I found quite a few instances of people on the Interwebs asking how it can be done, I thought I’d share it anyway.

The solution is to issue an API call to the GoCD server as the last step in the failure case. Unfortunately this can’t be done directly, as the pipeline is still running and is therefore unable to unlock itself: the GoCD server will respond with an error stating that the pipeline must not have active tasks in order to be unlocked. The answer is atd. It really shouldn’t be, but it works.
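The failure-stage script looks something like the following sketch (a reconstruction, not the original listing: the credentials are placeholders, and the endpoint path is an assumption — older GoCD releases use releaseLock, newer ones use unlock):

SERVER=$(echo "$GO_SERVER_URL" | sed -e 's|:[0-9][0-9]*/|/|')  # strip the port number

# Schedule the unlock call for when this run has finished; endpoint and
# credentials are assumptions (older GoCD: releaseLock, newer: unlock).
echo "curl -s -u 'unlock-user:secret' -H 'Confirm: true' -X POST \
  '${SERVER}api/pipelines/${GO_PIPELINE_NAME}/releaseLock'" \
  | at now + 1 minute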

The first line removes any port number from $GO_SERVER_URL, which in my case happens to use a non-default port that differs from the port on the reverse proxy.

The at command schedules the API call a minute out, so the pipeline is unlocked at a time when the run has finished and the server allows unlocking again.

Disclaimer: Don’t try this at home. For educational purposes only. ;-)

Requirements

  • Python 3
  • Pip 3

$ brew install python3

pip3 is installed along with Python 3.

Installation

To install virtualenv via pip run:

$ pip3 install virtualenv

Usage

Create the virtualenv:

$ virtualenv -p python3 <path/to/venv>

Activate the virtualenv:

$ source <path/to/venv>/bin/activate

Deactivate the virtualenv:

$ deactivate
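Putting it all together, a typical session might look like this (the venv path and the installed package are placeholders):

$ virtualenv -p python3 ~/venvs/demo
$ source ~/venvs/demo/bin/activate
(demo) $ pip install requests      # packages now install into the venv
(demo) $ deactivate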

Adding a subscription to an SNS topic in CloudFormation is easily done with a few lines:
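For example (a sketch, as the original listing isn’t shown here; MyTopic and MyFunction are placeholder logical names):

Subscription:
  Type: AWS::SNS::Subscription
  Properties:
    Protocol: lambda
    TopicArn: !Ref MyTopic          # placeholder topic
    Endpoint: !GetAtt MyFunction.Arn  # placeholder function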

But that’s actually not enough. While creating a subscription in the AWS Console is a single step, it implicitly creates a Lambda permission for SNS to invoke the function. This needs to be added as a separate resource in CloudFormation:
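A sketch with the same placeholder names:

LambdaInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt MyFunction.Arn
    Principal: sns.amazonaws.com
    SourceArn: !Ref MyTopic   # restrict the permission to this topic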

This will allow SNS to successfully invoke the function.

If you send a JSON message to your SNS topic, it will be encapsulated in the event sent to Lambda (as a JSON string), and you will have to unpack it in your function, like in this example for the node.js runtime:
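A minimal sketch of such a handler — SNS delivers the original message as a string in Records[0].Sns.Message:

exports.handler = (event, context, callback) => {
    // The original JSON message arrives as a string inside the SNS envelope
    const message = JSON.parse(event.Records[0].Sns.Message);
    console.log('Received message:', message);
    callback(null);
};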

I just needed to check a record on all NS servers of a zone, to see whether they had been updated with the new zone config and returned the correct IP addresses for the name.

On the CLI this is a considerable amount of typing, which is why I created a small script for it:
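A sketch of the idea (the published script may differ; zone and record name are passed as arguments, e.g. ./check-ns.sh example.com www.example.com):

ZONE=$1
NAME=$2

# Query the record on every authoritative NS of the zone
for ns in $(dig +short NS "$ZONE"); do
    echo "== ${ns} =="
    dig +short "$NAME" @"$ns"
done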


I created a small helper script that encapsulates the API calls for simple snapshot management. It can be used in cron jobs to trigger snapshot creation and cleanup.

Keeping manually created snapshots is the minimum backup safety one should have on top of automatically created snapshots, as the latter are deleted together with the instance. In case of an unintentional deletion of an RDS instance, automated snapshots are of no help in restoring the data.
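For illustration, the API calls the script wraps look roughly like this with the plain AWS CLI (instance name and snapshot identifiers are placeholders):

# Create today’s manual snapshot
$ aws rds create-db-snapshot \
    --db-instance-identifier mydb \
    --db-snapshot-identifier mydb-$(date +%Y%m%d)

# List manual snapshots to find expired ones ...
$ aws rds describe-db-snapshots \
    --db-instance-identifier mydb --snapshot-type manual

# ... and delete one that has expired
$ aws rds delete-db-snapshot --db-snapshot-identifier mydb-20170101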

RDS-Snapshot on GitHub

If you like to work with AWS on the CLI, you can easily open the AWS Console by using a specially crafted link that logs you in using the credentials from your shell environment. To assemble the link you can use this little Python script:
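A sketch of such a script — it uses the documented federation sign-in endpoint and assumes temporary STS credentials (including AWS_SESSION_TOKEN) in the environment:

#!/usr/bin/env python3
import json
import os
import urllib.parse
import urllib.request

# Package the temporary credentials from the shell environment ...
session = json.dumps({
    "sessionId": os.environ["AWS_ACCESS_KEY_ID"],
    "sessionKey": os.environ["AWS_SECRET_ACCESS_KEY"],
    "sessionToken": os.environ["AWS_SESSION_TOKEN"],
})

# ... exchange them for a sign-in token at the federation endpoint ...
url = ("https://signin.aws.amazon.com/federation"
       "?Action=getSigninToken&Session=" + urllib.parse.quote(session))
token = json.load(urllib.request.urlopen(url))["SigninToken"]

# ... and print the finished login link.
print("https://signin.aws.amazon.com/federation?Action=login"
      "&Destination=" + urllib.parse.quote("https://console.aws.amazon.com/")
      + "&SigninToken=" + token)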

Wanna know whether your ELB has at least one instance in service or not?

Just add a CloudWatch Alarm to your ELB and have it send status changes to SNS (from where you can route it to whatever notification system you’re using).
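For example, with the AWS CLI (a sketch: load balancer name and SNS topic ARN are placeholders; for a classic ELB the metric is HealthyHostCount in the AWS/ELB namespace):

$ aws cloudwatch put-metric-alarm \
    --alarm-name my-elb-no-healthy-hosts \
    --namespace AWS/ELB \
    --metric-name HealthyHostCount \
    --dimensions Name=LoadBalancerName,Value=my-elb \
    --statistic Minimum \
    --period 60 \
    --evaluation-periods 1 \
    --threshold 1 \
    --comparison-operator LessThanThreshold \
    --alarm-actions arn:aws:sns:eu-west-1:123456789012:my-topic \
    --ok-actions arn:aws:sns:eu-west-1:123456789012:my-topic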

Due to popular demand (read: colleagues asking) I put this little script online, which I originally found when googling for a way to reduce the number of commits that contain syntax errors in my Puppet recipes.

It will not find all errors (e.g. cyclic or broken requires), but it greatly reduces the risk of overlooking copy-and-paste errors or typos.
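In spirit it is a pre-commit hook along these lines (a sketch, not the original linked below; it assumes puppet and ruby are on the PATH):

for file in $(git diff --cached --name-only --diff-filter=ACM); do
    case "$file" in
        *.pp)  puppet parser validate "$file" || exit 1 ;;                 # manifests
        *.erb) erb -P -x -T '-' "$file" | ruby -c >/dev/null || exit 1 ;;  # templates
    esac
done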

Update: I found the original source on GitHub

Have you ever tried working with a huge monolith of an SVN repo that has lived for ages and seen a bazillion commits in its time? Then you might have experienced unpleasant times waiting for some process to chew through all that nasty pile of code.

Here is how I handle these behemoths, checking out just the subpath I need and leaving all the history behind:
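A sketch of the snippet (the original may differ; REPO and SUBPATH are placeholders):

REPO=https://svn.example.com/repo
SUBPATH=trunk/some/component

# Find the last revision that touched the subpath ...
REV=$(svn log -l 1 -q "$REPO/$SUBPATH" | awk '/^r[0-9]/ {print substr($1, 2); exit}')

# ... and clone only that subpath, starting from that revision
git svn clone -r "$REV" "$REPO/$SUBPATH"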

This snippet fetches the latest revision in which the subpath was edited and does a sparse, shallow checkout using git as the client.