Talk: Using Terraform, Packer, and Ansible Together

There’s a good chance that you have projects running on a server somewhere. What happens if that server gets accidentally erased, or if you need to spin up an identical server? Even if you have backups, you’ll still need to spend precious time setting things back up the way you had them – and there’s no telling if you’ll get it exactly right. That’s where this powerful dev tooling combo comes into play. By using Ansible, Packer, and Terraform, you can automate this entire process, getting as granular as you need to be.

In this talk, we’ll review what Ansible, Packer, and Terraform are individually, as well as how you can use them together. As a demo, we’ll automate the creation and deployment of a DigitalOcean droplet that’s pre-configured to run a fun personal-project website.

Installing the Docker Client CLI on 32-bit Windows

If you’re unfamiliar with it, Docker is one of the newer development tools on the scene, and it takes the power of virtual machines to the next level through a process known as containerization. Containerization means that instead of an entire separate operating system sitting behind each series of processes – as is the case with a virtual machine – each process gets its own lightweight, flexible container to run inside of. The containers all sit on top of the host’s own OS, so they take up significantly less space and processing power.

To use Docker, you need both a server running somewhere and a client to connect to that server. As of this writing, Docker claims to run only on 64-bit processors, and that’s true – but only for the server. You can still run the Docker client CLI on a 32-bit OS; it’s just much harder to install than on a 64-bit OS. Difficult doesn’t mean impossible though, and I wanted to share how I got the Docker CLI running on my little 32-bit Windows 10 laptop.

Installing on 64-bit vs 32-bit

The go-to way to install the current version of Docker (which at this time is v1.9) on a Windows OS is through the Docker Toolbox. This is a very handy package which installs both the Docker server and client components. But wait – remember how I said that the server can’t run on a 32-bit OS? That’s absolutely true, and for that very reason, the Docker Toolbox .exe file that gets downloaded is unable to run on a 32-bit Windows OS. You can’t get either the server or the client this way. Bummer.

How to Install the Docker Client

So are we out of luck? Well, that would be a pretty poor ending to this post, so I’m here to ease your nerves. It is possible to install the Docker client on 32-bit Windows – it’s just more difficult than downloading a simple installer file (and more fun). To install the Docker client, we’re going to manually install it through Chocolatey – a package manager for Windows. If you don’t have Chocolatey installed, you’ll need to open either an administrative command prompt session or an administrative PowerShell session.

To install Chocolatey through administrative command prompt, run this command:
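(This is the one-liner straight from chocolatey.org – grab the current version from there if it has changed:)

    @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin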

To install it through an administrative PowerShell session, run:
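    iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

(If PowerShell complains that script execution is disabled, run Set-ExecutionPolicy Bypass -Scope Process first and try again.)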

Perfect. After running either of these commands, you’ll have access to Chocolatey both in PowerShell and through the command prompt. You can select either of them to use as you continue through this post. We now need to use Chocolatey to install the docker client – which is actually pretty simple:
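At the time of writing, the client lives in the Chocolatey package named simply docker:

    choco install docker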

And that’s it! You should now have access to all the normal Docker shell commands. If you don’t, try closing out of your session and reopening it. Note: I was unable to install the Docker client through OneGet, which I thought was strange. OneGet is a manager of package managers, so to speak, so installing a package through OneGet will fire the Chocolatey install command – or a different package manager’s, if you’re not installing a Chocolatey package. This should have worked just like the normal “choco install” command did, but it didn’t. I had to use Chocolatey directly. No big deal, but I wanted to make sure I mentioned it.

If you run the following command, you should see the full list of commands you can run with your Docker client:
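    docker help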

You can run the version command to see your current client’s version – but that’s about it. Why don’t any of the other commands work? It’s because we haven’t linked our client up with a Docker server, which is where all of our container and image data will actually be stored. We can’t run the server on our current 32-bit Windows OS, so we’ll have to get it running somewhere remotely, and then link the connection in our client.

Running the Docker Server Remotely

There are tons of different ways that you can get the Docker server running remotely, and Docker itself makes this really easy by providing support for several drivers to run the server, such as AWS, DigitalOcean, VirtualBox, and many more. You can even create a 64-bit virtual Linux machine inside of your 32-bit Windows OS and install and run the Docker server on there; as long as both your Windows OS and the Linux OS are connected to the same network, they can connect with each other.

Personally, I went the AWS route, and I want to show you how easy that is. Using the AWS driver to run the Docker server will create an EC2 instance under your account that installs and runs the Docker server; it’s also locked down well by default, because the security group it creates blocks every port except the ones Docker needs (the TLS port the server uses for its connections to clients, plus SSH for provisioning). Now to start the Docker server using the AWS driver, you will need access to the docker-machine command, which you can’t get on 32-bit Windows. You’ll need either a 64-bit Mac, Linux, or Windows OS to get access to that command. It’s a pain, I know – but think about it this way: ideally, you’re not even supposed to be using a 32-bit machine with Docker at all, so everything we’re doing here is “beating the system.” That’s why we have to work for it.

On your other machine that has the docker-machine command working, run the following command:
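With the docker-machine release that’s current as of this writing, that looks roughly like the following – swap in your own credentials, and note that I’m naming the machine aws-docker since we’ll reference that name later:

    docker-machine create --driver amazonec2 \
        --amazonec2-access-key <YOUR_ACCESS_KEY> \
        --amazonec2-secret-key <YOUR_SECRET_KEY> \
        --amazonec2-vpc-id <YOUR_VPC_ID> \
        aws-docker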

There are 3 required options there – all of which you can read about on the AWS driver info page. Suffice it to say that they’re just used to authenticate you with your AWS account. Literally, after running this command with valid keys passed in, you’ll see Docker starting to install on the newly created EC2 instance, and soon it will be running. You can now run the following command on your non-32-bit-Windows OS to view the connection info of this Docker server:
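    docker-machine env aws-docker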

Copying the Keys and Certificates

To connect your client to the Docker server, you need to generate keys and certs that handle authenticating your client both for TLS and SSH. This is an easy process if you have access to the Docker server via CLI – but we don’t. We only have access to the client, which makes this more difficult.

This would normally be one of those moments where we’re just out of luck, but there’s a fix for this that I worked my way through with some reverse-engineering. To set up the Docker server, we needed access to a non-32-bit-Windows OS, and when we set up the server, Docker automatically generated the necessary keys and certs to establish a connection. To find out where our Docker client stored these keys and certs, we run our environment command again for our particular docker machine (using the same one as defined above):
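    docker-machine env aws-docker

Run from a Mac or Linux shell, the output looks something like this (your IP address and paths will differ):

    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://52.10.123.45:2376"
    export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/aws-docker"
    export DOCKER_MACHINE_NAME="aws-docker"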

We’ll be referring back to all of these environment variables later in the post, but for now we specifically want to look at the DOCKER_CERT_PATH variable. This is the path through which Docker grabs the necessary certs and keys to connect to the aws-docker instance. Here’s what my reverse-engineering uncovered: if you copy that folder that’s listed in that variable onto your 32-bit Windows OS, and properly set the corresponding environment variable on that OS too, then your Docker client will successfully use that to authenticate with the server. This is how you get around needing to access the server to get your keys and certs.

So, zip up that folder, email/dropbox/sharefile/whatever it over to your 32-bit Windows OS, and put it somewhere. It doesn’t have to match that exact file path, but you can get close by putting the .docker folder in your HOME directory. Now, everything’s in place; we just need to set up the environment variables and we’ll be good to go.

Adding the Environment Variables

To know where the Docker server is running, as well as its name and a few other config options, Docker looks to environment variables that are defined through your CLI. I prefer to do this through PowerShell on Windows, so that’s what my following examples will be using. To see all of your current environment variables, enter the following command:
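    Get-ChildItem Env: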

Specifically, we need to set 4 environment variables that Docker uses, and they’re the 4 listed above in the previous section. Here’s an example of what they should look like:
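    Name                    Value
    ----                    -----
    DOCKER_CERT_PATH        C:\Users\you\.docker\machine\machines\aws-docker
    DOCKER_HOST             tcp://52.10.123.45:2376
    DOCKER_MACHINE_NAME     aws-docker
    DOCKER_TLS_VERIFY       1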

Your specific Docker variables, such as host, path, and name, will be different – so keep that in mind. Use the same environment variables that your non-32-bit-Windows OS showed. We need to set each of these variables manually, just as is shown here, and to do that we run:
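    $env:DOCKER_HOST = "tcp://52.10.123.45:2376"

(That sets a variable for the current session only; to make it stick across sessions, use [Environment]::SetEnvironmentVariable("DOCKER_HOST", "tcp://52.10.123.45:2376", "User") instead.)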

You should be able to see how you would substitute the variable name and value for each of the 4 variables we need to set. After you do this, if everything was set up properly, you should have a fully-functioning Docker client that is communicating with your remote Docker server. You can test it by running:
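    docker ps

On a fresh server, this prints nothing but the header row:

    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES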

This shows that there are 0 containers running on your server, so start one up!

Final Thoughts

Docker’s pretty serious about not wanting you to install the software on non-64-bit machines, which you can see by all the hoops we had to jump through to get it working. Even with it working, let’s say that we want to restart our server instance, create a new one, or regenerate certs and keys. We can’t do that, because we don’t have access to the server from this machine. We can only do things that the client can handle, such as managing images and containers, building new images from a Dockerfile, attaching to running containers, etc. On top of that, if our Docker server ever does regenerate certs or change URLs, or if we need to link up to a new server instance, it’s a pretty big pain to do that.

So, in the end, going through this process may not be worth it to you, but if you have a 32-bit Windows OS lying around and you want to experience the power of Docker containers on that bad boy – then I hope this guide has helped you a little bit.

Controlling Spotify with Slack and a Raspberry Pi

After moving to a newly constructed floor at Staplegun (where I work), the developers (all 4 of us) chose to switch to an open floor plan. One of the big updates included in this move was a shared audio system with speakers all around, and with us working in very close proximity to one another, it became very important for each of us to be able to control the music selection easily. The sound system had no “smart” attributes or network connectivity, so at the most basic level we could have just hooked up an audio cable from our phones to the auxiliary input and played music that way – but our sound system hub is in our server room, which is nowhere near where we work, so that option quickly got thrown out. Other than hooking up a Bluetooth connector or some other third-party-connection widget with cables going into the speaker, we were pretty much out of luck. Or so we thought.

We realized we had a spare Raspberry Pi lying around, which has an audio output as well as an ethernet port. Theoretically, we could somehow connect to the Pi over our network and stream music from it. The “how” was the big question. On top of that, we all use Slack heavily at work, so could we take it one step further and control our music selection via Slack? Sounds far-fetched, I know – but that’s exactly what we did, and I want to show you how you can do it too.

Prerequisites

As you’re following along, there are a few things you need in order to build everything in this post:

  • You need a premium Spotify account (need this to get API access).
  • You need a Raspberry Pi (preferably at least a Pi 2, but any Pi should work).
  • You need a speaker to connect to your Pi.
  • Your Pi needs internet access, either wirelessly or via ethernet cable.
  • You need Node.js v0.10.x and libspotify installed on the Pi.

That last one is very important – the library we’re going to use doesn’t work with later versions of Node (hopefully this gets updated in the future). All set? Good, let’s get to it.

Getting Everything Set Up

To allow our Slack channel to make requests to our Pi, and then for our Pi to make requests to Spotify, we need to use a package called crispyfi. Navigate to your desired folder on your Pi, and clone the crispyfi repo:
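(I’ve left the GitHub user as a placeholder here – substitute the actual path of the repo you find:)

    git clone https://github.com/<github-user>/crispyfi.git
    cd crispyfi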

After you get this cloned, there’s quite a process you’ll have to go through to get the “Slack to Pi to Spotify” communication chain going; it’s very well documented on the crispyfi readme, so I’ll direct you there to get things set up, but in a nutshell, this is what you’ll need to do:

  • Register a Spotify application and get a Spotify key file (you need a premium membership to do this).
  • Continue with crispyfi’s documentation on where to add in your Spotify username, password, and key file.
  • Create a custom outgoing webhook integration in Slack and set the trigger words to play, pause, stop, skip, list, vol, status, shuffle, help, reconnect, mute, unmute.
  • You can name your webhook (we called ours jukebox), give it an emoji icon, and select whether the webhook should listen globally on all channels. At Staplegun, we only have this webhook listening on a single channel that’s dedicated to controlling music.
  • Don’t worry about the webhook’s URL field for now – we’re going to edit that later (you’ll still probably need to fill it in with some dummy data though) – and make sure to copy the token that Slack gives you.
  • Add the Slack token in crispyfi’s config.json file.

The idea here is that whenever you chat one of the trigger words in a channel, the outgoing webhook will fire and make a POST request to your designated URL (which we haven’t set yet) including the specific message that triggered it. That POST request will hit the crispyfi server we’re going to run, which will handle all communication to Spotify and back. The Pi will stream music from Spotify and send it to the audio output port, which you would hook up to a speaker.

Once we’ve added all of our config data into our crispyfi project, we can install the dependencies and spin up the server on port 8000:
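Assuming the standard Node conventions apply (check the readme if the start script is named differently):

    npm install
    npm start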

If you have everything set up properly, then you should see output stating that crispyfi successfully logged into Spotify with your credentials. Now here’s a problem: we have the server running, but our Slack webhook can’t reach it because our Pi doesn’t have a static IP. To get around this, we can use a wonderful tool called ngrok, which will expose any port on our localhost to the outside world by providing an ngrok URL. Install ngrok via NPM and then run it for port 8000:
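    npm install -g ngrok
    ngrok 8000

(Newer versions of ngrok use the syntax ngrok http 8000 instead.)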

This will take over your terminal pane and provide you with a URL such as http://10c06440.ngrok.com. This is the URL we want our Slack webhook to have – followed by the /handle route. So go back to Slack, edit your webhook, and change the URL to be:
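    http://10c06440.ngrok.com/handle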

You’ll have a different ngrok URL, so you’ll need to swap the above URL with the one that you’re provided. If you’ve done everything correctly, then your Slack should now fully be able to control your music selection through your Spotify account!

[Screenshot: controlling the music from Slack with crispyfi commands]

Taking It a Step Further

Crispyfi is a great service – but it currently only works with Spotify URIs. That means you can’t play music based on a search for title, artist, album name, etc. – you have to copy the exact URI from Spotify to play a certain song or playlist. We wanted to add this “music query” feature at Staplegun, and we were able to pretty easily get it through a hubot script called hubot-spotify-me.

If you use Slack at work – or any other instant messaging application – and you don’t use hubot, then I highly recommend you check it out. Not only is it a fun bot that can make your team interactions more lively, but you can program it with some sweet scripts that really boost productivity; that in itself is a topic that warrants its own blog post, so I’ll just stick to discussing the hubot-spotify-me script for now.

If you install this script, then you can trigger it in Slack with the following format:
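Assuming your bot answers to the name hubot, the format is:

    hubot spotify me <song, artist, or album to search for>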

And it will return a Spotify URL. If we convert this into a Spotify URI (which is simple to do), then all we’re missing is the trigger word play in order to automatically issue a webhook request to our crispyfi server to play the song. Well – there’s no simple way to make the hubot script reformat the Spotify URL and prefix it with the word play, so we’ll have to actually edit some code here. Here’s the exact file path and changes you need to make:

[Screenshot: the file path and code changes to make in the hubot-spotify-me script]

After you make these changes and deploy them to hubot – you’re good to go! Your new-and-improved Spotify hubot command will look like this:
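(the track URI below is just a stand-in for whatever Spotify returns:)

    hubot spotify me never gonna give you up
    play spotify:track:<track-id>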

And this will trigger your outgoing webhook to perform a request to your crispyfi server! Boom!

Final Thoughts

This setup is really powerful, and after you get it all in place, you definitely deserve a few beers. There’s a lot of devops work going on here, which is tough stuff. While it’s a really awesome service to have going for our personal team, there are a few things I don’t like.

  • Crispyfi uses libspotify, which is currently the only way to make CLI requests to the Spotify API. Spotify has openly stated that libspotify isn’t actively maintained anymore – BUT they haven’t released an alternate library to take its place yet. How they stopped supporting something without providing a replacement is beyond me – but that’s how it is right now.
  • Crispyfi itself isn’t super maintained either, with a majority of the commits having occurred during a few-month period at the end of 2014. Still, it’s the only valid library we could find that accomplished what we needed, and it sure beat spending the several man-hours to build the same thing ourselves!

Even with these concerns, this setup is a game changer. To fully control all of our music (play, pause, control volume, manage playlists, etc.), we now just issue commands in a Slack channel, and it happens instantly. Nothing we’ve tried works better for us, and I bet you’ll discover the same thing for your team. Plus – this way we can Rick Roll our team if one of us is working from home!

Power Tools: Using Grep, Xargs, and Sed

I was recently inspired to write this post after I came across a situation where I needed to edit multiple files and change all instances of one word to another (in this case I was changing the word vim just to v). While this sounds like a simple task, let’s break it down for a second to see everything that’s entailed: we have to find the files that contain this word, then spin through each of those files, open it up, modify it, and rewrite the file in place to the same filename. It may still sound simple, but we have a lot of moving parts going on here.

Many high-level text editors and IDEs have the ability to do this for you, which is certainly nice, but what happens if you’re in an environment where you don’t have access to those tools? You may say that you’ll never work away from your personal machine, but it’s very possible you could log into a VPS or ssh into another user’s machine where all you have access to are terminal tools. Additionally, the need to do this is not necessarily developer-specific; if you’re a systems administrator, for example, you might not have higher-level editors installed – but you probably have some shell skills. That’s where three tools come in that are included in the base shells we use today: grep, xargs, and sed.

You easily could have heard of these before and already know how to use them, and if so, then carry on, friend! You’ve probably nothing more to gain here. But if you’d like to know just a little bit about how to use them, read on.


Grep

Grep is a base Unix search command which will spin through one or many files in order to tell you which files contain your phrase, plus a little info about where the matches are located. Here’s an example of a standard way to use grep:
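    cat index.html | grep footer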

This would print out each line in index.html that contained the word footer. You can also search for phrases that include spaces by surrounding the phrase with quotation marks (they won’t count as part of the search query). Or you can use grep as a sole command, and not pipe anything to it:
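    grep "this is a phrase" *.txt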

This would print out each line in every text file in the current directory that contained the phrase “this is a phrase.” Additionally, if we’re searching through multiple files, we can pass in the -l flag to get just the filenames. Grep also has support for regular expressions, which can be used with the -G option:
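    grep -lG "ngrok [0-9]000$" *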

This would find all instances of a line that ends in ‘ngrok *000’, where the * represents any digit, and only the filenames will be printed out. Grep can do much, much more than this, but using it as shown here is probably the most common. Other search tools such as Ack and Ag exist that are geared towards filtering source code, but I wanted to stick with grep since it’s a common tool that exists on all *nix systems.

Xargs

Xargs is an awesome command which basically has one job – you give it a command, and it runs that same command multiple times for a certain number of arguments that you give it. If you’re a programmer, think of it as a loop that executes through a list. Per the man page of xargs, it takes delimited strings from the standard input and executes a utility with the strings as arguments; the utility is repeatedly executed until standard input is exhausted.

Sound too wordy? An example is worth a thousand words:
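    ls | xargs -n 1 echo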

This will run the echo command as many times as you have files in the current directory, passing in each filename (piped in by the ls command) so that each individual file name gets echoed. The -n 1 option tells xargs to split the arguments and pass only one argument per command iteration. If you specified 2, then you would echo 2 filenames on the same line, and if you leave out the option altogether, then you will just echo once, listing every filename on the same line. One caution: by default, xargs splits arguments on any whitespace, which will break up filenames that contain spaces; the -0 option makes xargs split on NUL bytes instead, which is safe when paired with a NUL-delimited producer like find -print0.

By default, xargs adds in the arguments at the end of the command call, but what if we need to use that argument at the beginning or the middle of the line? Well, that’s completely doable with the -I option.
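For example, to copy every file in the current directory into a (hypothetical) backups folder, using {} as the placeholder token:

    ls | xargs -I {} cp {} backups/{}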

Now xargs will no longer pass in the argument at the end of the line by default; instead, we have a placeholder for our arguments that we can use wherever we please in our command.

Pretty simple. Xargs does have some more options, but this is the crux of what you use it for: splitting up incoming arguments to be used as a part of another command.

Sed

Sed, just like xargs, has one job that it does very well. Short for stream editor, sed is a handy little command which will read one or more files (or standard input, if no file is given), apply changes to those files based on a series of commands, and then write the output either in place of the file or to the standard output. What this means for you is that you can very easily and quickly replace text in multiple files with this one command. Here’s a simple example:
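    sed 's/start/end/g' *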

This will spin through every file in the current directory and replace every instance of the word start with end, but it will write the output to the standard output and not update the actual files. If we wanted to open up the files, make the changes, and then save them in place (probably how you want to use sed), then we just need to throw in one little option:
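    sed -i '' 's/start/end/g' *

(That empty-quotes syntax is what BSD sed – the version macOS ships – expects; GNU sed on Linux takes the backup extension glued directly onto -i.)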

The -i option states that we want to edit the files in place, saving backups to the same filename appended with a given extension. By passing in empty quotes, we skip saving the backups and are only left with the changes to our files. This tool is very powerful; it probably doesn’t seem like you’re doing much – but when you can change every instance of a phrase in 100+ files at a time, with a command under 20 characters, it’s crazy to think about. Now, with great power comes great responsibility. Due to its simplicity, it’s easy to get carried away or not double-check yourself. There’s no undo here, so if you do use sed, make sure you do a dry run without the -i option first, and it would be even better if you make these changes in a versioned environment (using something like git) so you can revert changes if you need to.

Combining Them

By combining these three small commands that are common across all *nix systems, we can do some pretty powerful text replacement. Most of the action comes from using sed, but the other commands help gather and prepare everything. So let’s put together what we’ve learned into a single command that we can actually use:
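    grep -l "vim" * | xargs sed -i '' 's/vim/v/g'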

Look familiar at all? This was the command I mentioned at the beginning of the post that I ran to change all instances of vim to just be v instead. It’s true, for this particular situation I could have gotten away with using only sed, but that’s only because I was searching for the exact term that I wanted to change. If I wanted to search for all the files that had the phrase Hallabaloo, but still change the word vim to v, then I would need to write a full command like this.

So will you always need to run a command like this? No, but you probably will at some point, and even if you have an easier way to do it than remembering this multipart command, I hope you’ve at least learned a little bit more about how you can use grep, xargs, and sed in your workflow.

Getting Familiar with Bower and Browserify

Note: This post has been updated as of October 23, 2015.

Lately I’ve been getting into build automation quite a bit and trying to maximize my workflow productivity without having to worry about the not-fun things like ensuring that I’m including all my files, concatenating scripts together, and manually running build tasks. I’ve been using grunt for a while now, which has been key for speeding up my workflow when I’m working with new web projects, but I knew there was more out there to explore. I had heard bower and browserify thrown around on Twitter and at local dev meetings, and I knew that my fellow developers were making use of these tools, so I decided to check them out. Man … I’m glad I did, because these are tools that every full-stack developer should know about.

While bower and browserify aren’t necessarily related, I use them together quite a bit because they’re both geared specifically towards client-side development, and it’s this bond which makes them such a powerful combo. Let’s start off with a bio of what they both are:

Bower

Bower is a front-end package manager, and it works similarly to NPM or RubyGems. You can either install packages one-by-one with a simple
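    bower install jquery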

Or you can create a bower.json file in which you specify lists of packages and their versions that you want to fetch. It gathers and installs packages from all over, taking care of hunting, finding, downloading, and saving the stuff you’re looking for. No longer do you need to manually download front-end packages from the source site or GitHub – now you just tell bower to do it. Install it with NPM:
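    npm install -g bower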

Similar to NPM, bower will install all packages inside of a bower_components directory at the root of where you run the install command. Here’s what a sample bower.json would look like:
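(the version numbers here are just examples:)

    {
      "name": "my-project",
      "dependencies": {
        "jquery": "~2.1.4",
        "modernizr": "~2.8.3",
        "normalize-scss": "~3.0.3"
      }
    }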

And would be installed with a simple
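    bower install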

This will fetch the specified version of jQuery, Modernizr, and Normalize-SCSS. Notice how there’s both javascript and sass in there? Bower isn’t language specific, so you can get javascript, css, sass, less, and much more. The files that bower retrieves are meant to be physically included into your project, so the bower_components directory is very clean and well structured.

So what makes bower any better than the other common package managers like NPM and RubyGems? Well, none of them are necessarily better than the other – they all handle specific types of packages. All three of these package managers allow you to list out your dependencies and versions, and will ensure that the full dependency tree is met. However, NPM and RubyGems are more geared towards server-side development and also allow the installation of global executable commands. Bower is much simpler in that it is only meant to find the front-end packages that you need, and dish them out for you.

Now that we’ve discussed how to gather our client-side packages in a clean, agile, and no-hassle manner, let’s talk about how we can build them all together and include just one bundle into our main html. Enter browserify.

Browserify

Browserify is a tool which, just like bower, gives your client-side workflow a serious improvement; this tool, however, is javascript specific. Browserify seems to be steeped in a lot of mystery and confusion, and a lot of developers stray away from it without really understanding the benefits. Browserify is honestly really simple; it only does two things for you:

  • It allows you to use node-style require() calls in your client-side javascript
  • It gives you a CLI to bundle those files together and compile them down into javascript that the browser can understand

That’s it! With browserify, you can write modular code the ‘node way’ while at the same time writing purely front-end code. Here’s how to install it:
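    npm install -g browserify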

And here’s an example file that we’ll eventually compile with browserify:
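(the two bower_components paths are stand-ins for whichever libraries you’ve actually pulled in:)

    // main.js
    var $ = require('jquery');

    require('./bower_components/some-lib/some-lib.js');
    require('./bower_components/another-lib/another-lib.js');

    $(document).ready(function() {
      // app code goes here
    });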

This file includes jQuery (required in a way that assumes it’s installed as a node package), as well as two external libs that I’m using. By setting jQuery to a variable, I am able to use the standard $ operator and have it only be accessible within the scope of this file. Because the other two files aren’t set to variables, they are loaded just within the general scope of the file, as if they had already been included in that page’s html.

By having these external files installed with bower, I can access their source files directly with the help of browserify. This is similar to using the @import function in sass, but because browserify accounts for modularity, these files will only be accessible in the scope that you require them.

Last but not least, let’s build this puppy:
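    browserify main.js -o bundle.js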

This will run through our main.js file, gather all of the required files, and build it all into a file called bundle.js. This would be the file that you include in your html, and it will be written in browser-compatible javascript. That’s how you do node – the browser way.


So at this point, we’ve established a good footing on bower and browserify, both of which are tools geared towards making your front-end workflow as efficient and clean as possible. We also discussed how you can install vendor packages with bower and then include them directly into your javascript using browserify, allowing you to write modular front-end code. Now this is a big improvement over manually finding and downloading vendor packages from the internet and muddying up your html by including multiple libraries (not to mention ignoring the concept of scope altogether), but we can still improve on this workflow. After all, we’re having to manually run the browserify command every time we want to rebundle our files – and we don’t enjoy manual labor like that.

So what can we do? Well, I mentioned I’ve been getting into build automation lately, so I bet we can standardize this workflow and give instructions to a tool like grunt to do all the work for us. We covered our basics here, so next time we can get into maximizing our javascript building by incorporating a task runner (as well as a few other tricks I’ll show you).

Feel free to check out the next post in this series – Building Javascript with Grunt, Bower, Browserify.

How to Learn Vim

Finally, 7 months later, I’m following up with my first vim blog post about why you should use vim. If you’ve made it here, then you’re either seriously interested in learning vim (which would be awesome), or you just came here of your own random volition. Either works for me, but if you have heard of vim and are just a little bit hesitant to learn it, then fear no more. I’m going to teach you the best methods to learn vim.

Prerequisite: You must have vim installed. You can do this through Homebrew, apt-get, yum, or any other package manager your system supports. You do not need graphical vim (GVim or MacVim).

Vim Tutor

[Screenshot: the Vim Tutor open in a terminal]

Open up your shell and type in the command:
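    vimtutor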

This will open up the Vim Tutor, which is a nice little interactive program that teaches you how to use Vim. This is my preferred way to learn Vim, and if you are on your first go around, it will probably take about 30 minutes to complete. You don’t need any other resources – just your terminal (not even a mouse!).

When I initially learned vim, I completed this short course about 4-5 times all the way through. Naturally, after the first time you get much quicker, and the lessons become more of a refresher. I suggest using the Vim Tutor initially to see if you really want to learn vim, and if so, then continue using it to get familiar with the basics.

Note: Vim is not difficult to learn, but you will be slow for the first week or so. That’s natural. Roger Federer didn’t win Wimbledon in his first professional year either.

Vim Golf

Vim Golf is a Ruby gem which you can install, and it’s a game-based method for learning vim. A common concept in vim is considering how many keystrokes you need in order to get something done; obviously, the fewer you use, the quicker you are, and therefore you want as few as possible. This is the idea behind Vim Golf – you are trying to get a low keystroke score.

For installation and running, please check out their website. There you can see some of the challenges and other people’s scores, and the whole Vim Golf project is also on GitHub. I haven’t personally used Vim Golf, but I know people who have, and they had great success with it.

Just Start Using It

There are a plethora of other tutorials out there for vim, because people know it’s not the simplest thing in the world to grasp. But in my experience, once you start to understand the basics (which you will through the Vim Tutor), I suggest just getting out there and trying to really use it as your editor.

Remember, you will be slow, and you will forget things. And certain things will seem more difficult than they should be at first (like copying a section of code for pasting), but trust me, if anything seems unnecessarily hard in vim, then there’s definitely an easier way to do it and I encourage you to Google it.

Once you get out there and really start using vim, give yourself 2 weeks to really see how you feel. If you’re on a huge project on a short deadline, then use your preferred editor to get your work done quickly, but make sure you don’t forget about using vim. It takes some practice to learn…but it is so incredibly rewarding.


Extras

Plugins

The vim community is very, very active and is completely focused on productivity. You can find an active vim tag on Stack Overflow, as well as various twitter accounts created solely for publishing cool vim stuff.

Vim by itself is powerful, but relatively basic. You can add so much on top of the base vim installation through plugins. For example, my editor has autocomplete (like Microsoft’s IntelliSense), shortcuts based on what file type I’m in, custom color scheming, commenting shortcuts, git diff integration, auto coloring of hex values, and so much more.

These can all be found on GitHub, and I recommend using the powerful Vundle tool for downloading and installing plugins (it works very similarly to a ruby Gemfile).
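To give you an idea, a minimal Vundle setup in your ~/.vimrc looks something like this (the plugins listed are just examples):

    set nocompatible
    filetype off

    " let Vundle manage itself
    set rtp+=~/.vim/bundle/Vundle.vim
    call vundle#begin()
    Plugin 'VundleVim/Vundle.vim'

    " a couple of example plugins – swap in whatever you like
    Plugin 'tpope/vim-commentary'
    Plugin 'airblade/vim-gitgutter'

    call vundle#end()
    filetype plugin indent on

Then run :PluginInstall inside vim and Vundle fetches everything for you.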

Plugin/Colorscheme Distributions

If you don’t want to worry about customizing your vim colorschemes and plugins, then guess what … you don’t have to! There are 2 massively popular vim distributions which come complete with multiple colorschemes and very useful plugins. The two are:

  • Janus
  • spf13-vim

While I currently have my own set of vim customizations that sit on top of Thoughtbot’s minimal vim config, I previously used Janus for a little over a year. I really, really liked it, and it was right when I started feeling comfortable with vim that I checked it out. Let me just say, my productivity skyrocketed.

Both distributions come with a base set of colorschemes and awesome plugins, and you can even add more plugins on top of that if you find some you’d like to use. I highly recommend using a vim distribution as your first step at getting into vim customization.

That’s it! I hope you start using vim, and if you do, tweet at me to let me know!