Zero Downtime Deployments with Modern DevOps Tools

Deploying your applications with zero downtime isn’t just a dream; it’s a reality that you can and should be living in today. Gone are the days when you had to take the entire system down to deploy updates, lose money the whole time it was offline, and likely have a team push those updates during off-hours.

Imagine the following situation: every time you add features to your products, both big and small, those features automatically get validated against your business’s specific requirements and deployed in the exact setup tailored to your needs – all with complete confidence that your product will be 100% available the entire time. No more downtime, no more off-hours deploys, no more lost revenue. The best part of all is that you and your development team can start implementing this architecture right now with modern tools that are available to everyone.

In this talk, we’ll zero in on the tools behind this revolution: Kubernetes, CI/CD, Docker, and more. At Clevyr, we use these and other modern tools daily to facilitate our workflows, and we continuously strive to optimize as new tools and patterns hit the scene. Our speakers, Aaron Krauss and Gabe Cook, will give you the head start you need to implement zero-downtime deployments for your organization.

InnoTech 2021 Talk Link

Building an Efficient ETL Process with Laravel

Extract-Transform-Load (ETL) processes have been around for a long time in programming, and they’re not too challenging to understand: you take data from one or many sources (legacy DB, SAP, third-party data lake, spreadsheets, etc.), transform it to fit your own application’s workflows, and then load it into your own data sources. You can use ETL processes for syncing data across sources, migrating legacy data, aggregating multiple data sources, and more.

As you might expect, ETL processes involve a lot of data, and it’s up to you to ensure that you’re building a nice, lean ETL process. No one wants an ETL process that takes hours to run if it’s possible to run it in minutes. In this live-demo talk, we’ll use Laravel to implement an ETL process where we start with a functioning yet inefficient scenario and step-by-step turn it into a lean, mean ETL process. No prior Laravel knowledge is necessary, just an open mind. We’ll review writing efficient database queries, architecting your code to avoid common inefficient pitfalls, and more.

Talk: Using Terraform, Packer, and Ansible Together

There’s a good chance that you have projects running on a server somewhere. What happens if that server gets accidentally erased, or if you need to spin up an identical server? Even if you have backups, you’ll still need to spend precious time setting things back up the way you had them – and there’s no telling if you’ll get it exactly right. That’s where this powerful dev tooling combo comes into play. By using Ansible, Packer, and Terraform, you can automate this entire process, getting as granular as you need to be.

In this talk, we’ll review what Ansible, Packer, and Terraform are individually, as well as how you can use them together. As a demo, we’ll be automating the creation and deployment of a DigitalOcean droplet that’s pre-configured to run a fun personal-project website.

Installing the Docker Client CLI on 32-bit Windows

If you’re unfamiliar with it, Docker is one of the newer development tools on the scene, and it takes the power of virtual machines to the next level through a process known as containerization. Containerization means that instead of running an entire separate operating system behind each group of processes – as is the case with a virtual machine – each process gets its own lightweight, flexible container to run inside of. The containers all sit on top of the host’s own OS, so they take up significantly less space and processing power.

To use Docker, you need both a server running somewhere and a client to connect to that server. As of this writing, Docker claims to run only on a 64-bit processor, and that’s true – but only for the server. You can still run the Docker client CLI on a 32-bit OS; it’s just much more difficult to install than on a 64-bit OS. Difficult doesn’t mean impossible though, and I wanted to share how I got the Docker CLI running on my little 32-bit Windows 10 laptop.

Installing on 64-bit vs 32-bit

The go-to way to install the current version of Docker (which at this time is v1.9) on a Windows OS is through the Docker Toolbox. This is a very handy package which installs both the Docker server and client components. But wait – remember how I said that the server can’t run on a 32-bit OS? That’s absolutely true, and for that very reason, the Docker Toolbox .exe file that gets downloaded is unable to run on a 32-bit Windows OS. You can’t get either the server or the client this way. Bummer.

How to Install the Docker Client

So are we out of luck? Well, that would be a pretty poor ending to this post, so I’m here to ease your nerves. It is possible to install the Docker client on 32-bit Windows – it’s just more difficult than downloading a simple installer file (and more fun). We’re going to install the Docker client manually through Chocolatey – a package manager for Windows. If you don’t have Chocolatey installed, you’ll need to open either an administrative command prompt session or an administrative PowerShell session.

To install Chocolatey through administrative command prompt, run this command:
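    :: the official one-liner changes occasionally – check chocolatey.org/install for the current version
    @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"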

To install it through an administrative PowerShell session, run:
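    # as above, chocolatey.org/install has the up-to-date command
    Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))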

Perfect. After running either of these commands, you’ll have access to Chocolatey both in PowerShell and through the command prompt. You can select either of them to use as you continue through this post. We now need to use Chocolatey to install the docker client – which is actually pretty simple:
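    # at the time, the client-only package was simply named "docker"
    choco install docker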

And that’s it! You should now have access to all the normal Docker shell commands. If you don’t, then try closing out of your session and reopening it. Note: I was unable to install the Docker client through OneGet, which I thought was strange. OneGet is a manager of package managers, so to speak, so installing a package through OneGet will fire the Chocolatey install command – or a different package manager if you’re not installing a Chocolatey package. This should have worked just like the normal “choco install” command did, but it didn’t. I had to actually use Chocolatey to install it. No big deal, but I wanted to make sure I mentioned that.

If you run the following command, you should see the full list of commands you can run with your Docker client:
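    # running "docker help" (or just "docker") prints the full list of client commands
    docker help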

You can run the version command to see your current client’s version – but that’s about it. Why don’t any of the other commands work? It’s because we’re not linking our client up with a Docker server, which is where all of our container and image data will actually be stored. We can’t run the server on our current 32-bit Windows OS, so we’ll have to get it running somewhere remotely, and then link the connection in our client.

Running the Docker Server Remotely

There are tons of different ways that you can get the Docker server running remotely, and Docker itself makes this really easy by providing support for several drivers to run the server, such as AWS, DigitalOcean, VirtualBox, and many more. You can even create a 64-bit virtual Linux machine inside of your 32-bit Windows OS and install and run the Docker server there; as long as both your Windows OS and the Linux OS are connected to the same network, they can connect with each other.

Personally, I went the AWS route, and I want to show you how easy that is. Using the AWS driver to run the Docker server will create an EC2 instance under your account that installs and runs the Docker server; it’s also locked down by default, blocking access to every port except what the server needs to establish its TCP connections to clients. Now, to start the Docker server using the AWS driver, you’ll need access to the docker-machine command, which you can’t get on 32-bit Windows. You’ll need a 64-bit Mac, Linux, or Windows OS to get access to that command. It’s a pain, I know – but think about it this way: ideally, you’re not even supposed to be using a 32-bit machine with Docker at all, so everything we’re doing here is “beating the system.” That’s why we have to work for it.

On your other machine that has the docker-machine command working, run the following command:
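    # run this from the 64-bit machine; the angle-bracket values are placeholders for your own AWS credentials
    docker-machine create --driver amazonec2 \
        --amazonec2-access-key <your-access-key> \
        --amazonec2-secret-key <your-secret-key> \
        --amazonec2-vpc-id <your-vpc-id> \
        aws-docker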

There are 3 required options there – all of which you can read about on the AWS driver info page. Suffice it to say that they’re just used to authenticate you with your AWS account. Literally, after running this command with valid keys passed in, you’ll see Docker starting to install on the newly created EC2 instance, and soon it will be running. You can now run the following command on your non-32-bit-Windows OS to view the connection info of this Docker server:
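    # prints the connection details (host, cert path, etc.) for the new machine
    docker-machine env aws-docker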

Copying the Keys and Certificates

To connect your client to the Docker server, you need to generate keys and certs that handle authenticating your client both for TLS and SSH. This is an easy process if you have access to the Docker server via CLI – but we don’t. We only have access to the client, which makes this more difficult.

This would normally be one of those moments where we’re just out of luck, but there’s a fix for this that I worked my way through with some reverse-engineering. To set up the Docker server, we needed access to a non-32-bit-Windows OS, and when we set up the server, Docker automatically generated the necessary keys and certs to establish a connection. To find out where our Docker client stored these keys and certs, we run our environment command again for our particular docker machine (using the same one as defined above):
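    docker-machine env aws-docker

    # sample output – your values will differ:
    # export DOCKER_TLS_VERIFY="1"
    # export DOCKER_HOST="tcp://<your-ec2-public-ip>:2376"
    # export DOCKER_CERT_PATH="/Users/<you>/.docker/machine/machines/aws-docker"
    # export DOCKER_MACHINE_NAME="aws-docker"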

We’ll be referring back to all of these environment variables later in the post, but for now we specifically want to look at the DOCKER_CERT_PATH variable. This is the path through which Docker grabs the necessary certs and keys to connect to the aws-docker instance. Here’s what my reverse-engineering uncovered: if you copy that folder that’s listed in that variable onto your 32-bit Windows OS, and properly set the corresponding environment variable on that OS too, then your Docker client will successfully use that to authenticate with the server. This is how you get around needing to access the server to get your keys and certs.

So, zip up that folder, email/dropbox/sharefile/whatever it over to your 32-bit Windows OS, and put it somewhere. It doesn’t have to match that exact file path, but you can get close by putting the .docker folder in your HOME directory. Now, everything’s in place; we just need to set up the environment variables and we’ll be good to go.

Adding the Environment Variables

To know where the Docker server is running, as well as its name and a few other config options, Docker looks to environment variables that are defined through your CLI. I prefer to do this through PowerShell on Windows, so that’s what my following examples will be using. To see all of your current environment variables, enter the following command:
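    # lists every environment variable in the current PowerShell session
    Get-ChildItem Env: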

Specifically, we need to set 4 environment variables that Docker uses, and they’re the 4 listed above in the previous section. Here’s an example of what they should look like:
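    # placeholder values – use whatever your other machine’s docker-machine env output showed,
    # with DOCKER_CERT_PATH pointing at the folder you copied over
    DOCKER_TLS_VERIFY   = 1
    DOCKER_HOST         = tcp://<your-ec2-public-ip>:2376
    DOCKER_CERT_PATH    = C:\Users\<you>\.docker\machine\machines\aws-docker
    DOCKER_MACHINE_NAME = aws-docker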

Your specific Docker variables, such as host, path, and name, will be different – so keep that in mind. Use the same environment variables that your non-32-bit-Windows OS showed. We need to set each of these variables manually, just as is shown here, and to do that we run:
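    # repeat for each of the 4 variables; the "User" scope persists the value across sessions
    [Environment]::SetEnvironmentVariable("DOCKER_HOST", "tcp://<your-ec2-public-ip>:2376", "User")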

You should be able to see how you would substitute the variable name and value for each of the 4 variables we need to set. After you do this, if everything was set up properly, you should have a fully-functioning Docker client that is communicating with your remote Docker server. You can test it by running:
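    docker ps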

This shows that there are 0 containers running on your server, so start one up!

Final Thoughts

Docker’s pretty serious about not wanting you to install the software on non-64-bit machines, which you can see by all the hoops we had to jump through to get it working. Even with it working, let’s say we want to restart our server instance, create a new one, or regenerate certs and keys. We can’t do that, because we don’t have access to the server from this machine. We can only do the things the client can handle, such as managing images and containers, building new images from a Dockerfile, attaching to running containers, etc. On top of that, if our Docker server ever does regenerate certs or change URLs, or if we need to link up to a new server instance, it’s a pretty big pain to do that.

So, in the end, going through this process may not be worth it to you, but if you have a 32-bit Windows OS lying around and you want to experience the power of Docker containers on that bad boy – then I hope this guide has helped you a little bit.

Controlling Spotify with Slack and a Raspberry Pi

After moving to a newly constructed floor at Staplegun (where I work), the developers (all 4 of us) chose to switch to an open floor-plan. One of the big updates included in this move was that we now had a shared audio system with speakers all around, and with us working in very close proximity to one another, it became very important for each of us to easily be able to control the music selection. The sound system had no “smart” attributes or network connectivity, so at the most basic level, we could have just hooked up an audio cable from our phones to the auxiliary input and played music that way – but our sound system hub is in our server room, which is nowhere near where we work, so that quickly got thrown away as a plausible option. Other than hooking up a bluetooth connector or some other third-party-connection widget with cables going into the speaker, we were pretty much out of luck. Or so we thought.

We realized we had a spare Raspberry Pi lying around, which has an audio output as well as an ethernet cable input. Theoretically, we could somehow connect to the Pi over our network and stream music from the Pi. Now “how” was the big question. On top of that, we all use Slack heavily at work, so could we take it one step further and control our music selection via Slack? Sounds farfetched, I know – but that’s exactly what we did, and I want to show you how you can do it too.

Prerequisites

As you’re following along, there are a few things you need in order to build everything in this post:

  • You need a premium Spotify account (need this to get API access).
  • You need a Raspberry Pi (preferably at least a Pi 2, but any Pi should work).
  • You need a speaker to connect to your Pi.
  • Your Pi needs internet access, either wirelessly or via ethernet cable.
  • You need Node.js v0.10.x and libspotify installed on the Pi.

That last one is very important – the library we’re going to use doesn’t work with later versions of Node (hopefully this gets updated in the future). All set? Good, let’s get to it.

Getting Everything Set Up

To allow our Slack channel to make requests to our Pi, and then for our Pi to make requests to Spotify, we need to use a package called crispyfi. Navigate to your desired folder on your Pi, and clone the crispyfi repo:
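    # clone crispyfi into a folder of your choice (grab the repo URL from the crispyfi readme)
    git clone https://github.com/<crispyfi-owner>/crispyfi.git
    cd crispyfi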

After you get this cloned, there’s quite a process you’ll have to go through to get the “Slack to Pi to Spotify” communication chain going; it’s very well documented on the crispyfi readme, so I’ll direct you there to get things set up, but in a nutshell, this is what you’ll need to do:

  • Sign up for a Spotify app and get a Spotify key file (you need a premium membership to do this).
  • Continue with crispyfi’s documentation on where to add in your Spotify username, password, and key file.
  • Create a custom outgoing webhook integration in Slack and set the trigger words to play, pause, stop, skip, list, vol, status, shuffle, help, reconnect, mute, unmute.
  • You can name your webhook (we called ours jukebox), give it an emoji icon, and select whether the webhook should listen globally on all channels. At Staplegun, we only have this webhook listening on a single channel that’s dedicated to controlling music.
  • Don’t worry about the webhook’s URL field for now – we’re going to edit that later (you’ll still probably need to fill it in with some dummy data though) – and make sure to copy the token that Slack gives you.
  • Add the Slack token in crispyfi’s config.json file.

The idea here is that whenever you chat one of the trigger words in a channel, the outgoing webhook will fire and make a POST request to your designated URL (which we haven’t set yet) including the specific message that triggered it. That POST request will hit the crispyfi server we’re going to run, which will handle all communication to Spotify and back. The Pi will stream music from Spotify and send it to the audio output port, which you would hook up to a speaker.

Once we’ve added all of our config data into our crispyfi project, we can install the dependencies and spin up the server on port 8000:
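    # install the Node dependencies, then start the server on port 8000
    # (the crispyfi readme has the canonical start command; npm start is the typical one)
    npm install
    npm start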

If you have everything set up properly, then you should see output stating that crispyfi successfully logged into Spotify with your credentials. Now here’s a problem: we have the server running, but our Slack webhook can’t reach it because our Pi doesn’t have a static IP. To get around this, we can use a wonderful library called ngrok which will expose any port on our localhost to the outside world by providing an ngrok URL. Install ngrok via NPM and then run it for port 8000:
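    npm install -g ngrok
    # older ngrok releases took just the port; newer ones use "ngrok http 8000"
    ngrok 8000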

This will take over your terminal pane and provide you with a URL such as http://10c06440.ngrok.com. This is the URL we want our Slack webhook to have – followed by the /handle route. So go back to Slack, edit your webhook, and change the URL to be:
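    http://10c06440.ngrok.com/handle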

You’ll have a different ngrok URL, so you’ll need to swap the above URL with the one that you’re provided. If you’ve done everything correctly, then your Slack should now fully be able to control your music selection through your Spotify account!

[Image: crispyfi screenshot]

Taking It a Step Further

Crispyfi is a great service – but it currently only works with Spotify URIs. That means you can’t play music based on a search for title, artist, album name, etc. – you have to copy the exact URI from Spotify to play a certain song or playlist. We wanted to add this “music query” feature at Staplegun, and we were able to pretty easily get it through a hubot script called hubot-spotify-me.

If you use Slack at work – or any other instant messaging application – and you don’t use hubot, then I highly recommend you check it out. Not only is it a fun bot that can make your team interactions more lively, but you can program it with some sweet scripts that really boost productivity; that in itself is a topic that warrants its own blog post, so I’ll just stick to discussing the hubot-spotify-me script for now.

If you install this script, then you can trigger it in Slack with the following format:
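    hubot spotify me <song, artist, or album name>

(That’s the script’s default trigger phrase – swap hubot for whatever your bot is named.)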

And it will return to you a Spotify URL. If we convert this into a Spotify URI (which is simple to do), then all we’re missing is the trigger word play in order to automatically issue a webhook request to our crispyfi server to play this song. Well – the hubot script doesn’t offer a simple way to reformat the Spotify URL and prefix it with the word play, so we’ll have to edit some code ourselves. Here’s the exact file path and changes you need to make:

[Image: changes to the hubot-spotify-me script]

After you make these changes and deploy them to hubot – you’re good to go! Your new-and-improved spotify hubot command will look like this:
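    you:   hubot spotify me never gonna give you up
    hubot: play spotify:track:xxxxxxxxxxxxxxxxxxxxxx

(The track URI above is just a placeholder – your bot will reply with the real URI for whatever you searched.)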

And this will trigger your outgoing webhook to perform a request to your crispyfi server! Boom!

Final Thoughts

This setup is really powerful, and after you get it all in place, you definitely deserve a few beers. There’s a lot of devops work going on here, which is tough stuff. While it’s a really awesome service to have going for our personal team, there are a few things I don’t like.

  • Crispyfi uses libspotify, which is currently the only way to make CLI requests to the Spotify API. Spotify has openly stated that libspotify isn’t actively maintained anymore – BUT they haven’t released an alternate library to take its place yet. How they stopped supporting something without providing a replacement is beyond me – but that’s how it is right now.
  • Crispyfi itself isn’t super maintained either, with the majority of its commits having occurred during a few-month period at the end of 2014. Still, it’s the only valid library we could find that accomplished what we needed, and it sure beat spending the several man-hours it would take to build the same thing ourselves!

Even with these concerns, this setup is a game changer. To fully control all of our music (play, pause, control volume, manage playlists, etc.), we now just issue commands in a Slack channel, and it happens instantly. We haven’t found anything that works better for us, and I bet you’ll discover the same thing for your team. Plus – this way we can Rick Roll our team if one of us is working from home!

Power Tools: Using Grep, Xargs, and Sed

I was recently inspired to write this post after I came across a situation where I needed to edit multiple files and change all instances of one word to another (in this case I was changing the word vim to just v). While this sounds like a simple task, let’s break it down for a second to see everything that’s entailed: we have to find the files that contain the word, spin through each of those files, open them individually, modify them, and rewrite each file in place to the same filename. It may still sound simple, but there are a lot of moving parts here.

Many high-level text editors and IDEs have the ability to do this for you, which is certainly nice, but what happens if you’re in an environment where you don’t have access to those tools? You may say that you’ll never work away from your personal machine, but it’s very possible you could log into a VPS or ssh into another user’s machine where all you have access to are terminal tools. Additionally, the need to do this is not necessarily developer-specific; if you’re a systems administrator, for example, you might not have higher-level editors installed – but you probably have some shell skills. That’s where three tools come in that are available on virtually every *nix system: grep, xargs, and sed.

You may well have heard of these before and already know how to use them, and if so, then carry on, friend! You’ve probably nothing more to gain here. But if you’d like to know just a little bit about how to use them, read on.


Grep

Grep is the base Unix search command; it will spin through one or many files and tell you which of them contain your phrase, along with a little info about where each match is located. Here’s an example of a standard way to use grep:
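    cat index.html | grep footer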

This would print out each line in index.html that contained the word footer. You can also search for phrases that include spaces by surrounding the phrase with quotation marks (they won’t count as part of the search query). Or you can use grep as a sole command, and not pipe anything to it:
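    grep "this is a phrase" *.txt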

This would print out each line in every text file in the current directory that contained the phrase “this is a phrase.” Additionally, if we’re searching through multiple files, we can pass in the -l flag to get just the filenames. Grep also has support for regular expressions, which can be used with the -G option:
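    grep -l -G "ngrok [0-9]000$" *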

This would find every line that ends in ‘ngrok *000’, where the * represents any digit, but only the filenames would be printed out. Grep can do much, much more than this, but using it as shown here is probably the most common. Other search tools such as Ack and Ag exist that are geared towards filtering source code, but I wanted to stick with grep since it’s a common tool that exists on all *nix systems.

Xargs

Xargs is an awesome command which basically has one job – you give it a command, and it runs that same command multiple times for a certain number of arguments that you give it. If you’re a programmer, think of it as a loop that executes through a list. Per the man page of xargs, it takes delimited strings from the standard input and executes a utility with the strings as arguments; the utility is repeatedly executed until standard input is exhausted.

Sound too wordy? An example is worth a thousand words:
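    # echo each filename in the current directory, one filename per echo call
    ls | xargs -n 1 echo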

This will run the echo command as many times as you have files in the current directory, passing each filename (piped in by the ls command) to echo, so that every file name gets echoed individually. The -n 1 option tells xargs to pass only one argument per command invocation. If you specified 2, then you would echo 2 filenames on the same line, and if you leave out the option altogether, you would just echo once, listing every filename on the same line. One caveat: by default, xargs splits its input on any whitespace, so filenames containing spaces get broken apart; when that matters, pair find -print0 with xargs -0, which tells xargs to split only on NUL characters instead.

By default, xargs adds in the arguments at the end of the command call, but what if we need to use that argument at the beginning or the middle of the line? Well, that’s completely doable with the -I option.
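    # {} is the placeholder, and it can sit anywhere in the command
    ls | xargs -I {} echo "Found file: {}"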

Now xargs will no longer pass the argument in at the end of the line by default; instead, we have a placeholder for our arguments that we can use wherever we please in our command.

Pretty simple. Xargs does have some more options, but this is the crux of what you use it for: splitting up incoming arguments to be used as a part of another command.

Sed

Sed, just like xargs, has one job that it does very well. Short for stream editor, sed is a handy little command which will read one or more files (or standard input, if no file is given), apply changes to those files based on a series of commands, and then write the output either in place of the file or to the standard output. How this applies to the user is that you can very easily and quickly replace text in multiple files with this one command. Here’s a simple example:
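    # replace every "start" with "end", printing the results to standard output
    sed 's/start/end/g' *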

This will spin through every file in the current directory and replace every instance of the word start with end, but it will write the output to the standard output and not update the actual files. If we wanted to open up the files, make the changes, and then save them in place (probably how you want to use sed), then we just need to throw in one little option:
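    # BSD/macOS sed shown; with GNU sed, plain -i edits in place with no backup
    sed -i '' 's/start/end/g' *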

The -i option states that we want to write the files in place and save the backups to the same filename appended with a given extension. By passing in empty quotes, we skip saving the backups and are left with only the changes to our files. This tool is very powerful; it may not seem like you’re doing much, but when you can change every instance of a phrase to another phrase in 100+ files at a time with a command under 20 characters, it’s crazy to think about. Now, with great power comes great responsibility. Due to its simplicity, it’s easy to get carried away or not double-check yourself. There’s no undo here, so if you do use sed, make sure you do a dry run without the -i option first, and it would be even better to make these changes in a versioned environment (using something like git) so you can revert if you need to.

Combining Them

By combining these three small commands that are common across all *nix systems, we can do some pretty powerful text replacement. Most of the action comes from using sed, but the other commands help gather and prepare everything. So let’s put together what we’ve learned into a single command that we can actually use:
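    # find every file containing "vim", then replace each instance with "v" in place
    grep -l "vim" * | xargs -n 1 sed -i '' 's/vim/v/g'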

Look familiar at all? This was the command I mentioned at the beginning of the post that I ran to change all instances of vim to just be v instead. It’s true, for this particular situation, I could have gotten away with using only sed, but that’s only because I was searching for the exact term that I wanted to change. If I wanted to search for all the files that had the phrase Hallabaloo, but still wanted to change the word vim to v, then I would need to write a full command like this.

So will you always need to run a command like this? No, but you probably will at some point, and even if you have an easier way to do it than remembering this multipart command, I hope you’ve at least learned a little bit more about how you can use grep, xargs, and sed in your workflow.