Talks: How Blocks Work in Ruby

Watch this OKC Ruby talk, in which Aaron Krauss speaks about blocks.

Ruby has these things called “blocks,” and they’re one of the key features that set it apart from most other programming languages. Oftentimes they take the place of a callback, a lambda function, or even a loop – but they have their own Ruby-specific quirks too. In truth, they’re a pretty neat feature that makes the language much more intuitive to use – especially for developers new to it. What exactly is a block, how do you use it, and why would you use it? We’ll answer all of these questions in this talk, as well as briefly review Ruby’s other “callable” structures, lambdas and procs.
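As a quick taste of what the talk covers – a block passed to a method via `yield`, plus the same idea captured as objects with a proc and a lambda:

```ruby
# A method that takes a block and calls it twice with `yield`
def twice
  [yield(1), yield(2)]
end

twice { |n| n * 2 }   # => [2, 4]

# The same idea as objects: a proc and a lambda
shout  = proc   { |s| s.upcase }
double = lambda { |n| n * 2 }

shout.call("hey")   # => "HEY"
double.call(21)     # => 42
```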

Running a Rails App with Docker

Docker is quickly becoming the new cool-kid tool around town, and despite the fact that it’s still in rapid development, it has become stable enough over the past couple of years to the point where you can actually use it for some of your production apps. If you Google around, you’re bound to find plenty of tutorials that review how to get started with Docker and (insert your favorite language) – but I want to take it a step further. I don’t want to just show you how to run a Rails app with Docker; I want to build with you an actual use-case scenario where using Docker makes a ton of sense. See, where Docker truly excels is when you have multiple services that are all communicating together. These services can be practically anything: a web server, app server, database, background job processor, etc. – and when you start having a bunch of services all relying on each other, it makes development more difficult, especially collaborative development. What if my computer’s running a different version of one of these services than yours is? That could easily cause inconsistencies if we’re developing in a team environment. In order to eliminate these kinds of issues, incorporating something like Docker really becomes appealing.

To take things a step further, we’ll also be talking about docker-compose – a wonderful tool by the Docker team that helps manage the running of multiple containers.

Let’s talk a bit about what we actually want to build.

The Scenario

ERD over Rails Docker Blog demo

We want to build a simple Rails blog API with user, post, and comment resources; you can see the ERD of our planned database over to the right. Whenever we create a comment (i.e. make a successful POST request to /comments), we want to send an email to the creator of the post, letting them know a comment was created. However – we don’t want to send this email synchronously; we want to offload it to a background job processor to handle the actual sending of the email. Last but not least, instead of actually sending this email to a real person while in development, we just want to capture the email so that we can inspect and debug it. To do this, we’ll use a really cool tool called maildev, which sets up an SMTP server to capture emails, as well as an HTTP server to allow us to view them.

This may sound like a lot going on – but don’t overthink it; we’re just sending an email when a comment is created. It just so happens that we have a few more services needed in order to make this happen – which is why this is a perfect scenario to showcase how powerful Docker is.

What is Docker?

Just to make sure we’re all on the same page, let’s briefly review what Docker actually is. Docker is a platform for running and managing what are called containers – i.e. lightweight pieces of software that are geared to running a single specific process. They’re sort of like virtual machines, but much, much smaller and they each have a specific job to do. Containers are also designed to be spun up and down very quickly, which helps make them appealing. In our above example, we would have separate services running for the Rails app, background job processor, maildev – and more.

Getting Started

Note: For the following code, we’ll be using Rails v5.0.2.

We’re ready to start building our app – but for this first segment, we’re not even gonna use Docker. There’s a little bit of setup for us to get through before we take that step.

Run the following commands to get our base database structure going (for a real app, we would use a dedicated database server such as PostgreSQL or MySQL; for now, we’ll stick with the default SQLite).
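The commands would look something like this, assuming an API-only app and scaffolds named after the ERD:

```shell
rails new blog-api --api
cd blog-api
rails generate scaffold user name:string email:string
rails generate scaffold post title:string body:text user:references
rails generate scaffold comment body:text user:references post:references
```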

Before we migrate our database, let’s add some simple seeds – just to get some dummy data in there:
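db/seeds.rb might look like this (the attribute values are just dummy data):

```ruby
# db/seeds.rb
user = User.create!(name: 'Aaron', email: 'aaron@example.com')
post = Post.create!(title: 'Hello World', body: 'My first post!', user: user)
Comment.create!(body: 'Great post!', user: user, post: post)
```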

Good, now we can migrate our database and run our seeds:
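With Rails 5, that’s:

```shell
rails db:migrate
rails db:seed
```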

Once we have all of that set up, let’s go ahead and add the sidekiq gem to our Gemfile, because that’s what we’ll be using as our background job processor.
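In the Gemfile:

```ruby
# Gemfile
gem 'sidekiq'
```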

Sidekiq depends on Redis – which means we’ll be tying that into our setup later on as well!

Now install the gem:
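From the project root:

```shell
bundle install
```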

We now have our app set up, so let’s go ahead and generate our mailer:
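Using the Rails generator:

```shell
rails generate mailer CommentMailer
```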

This will create the file app/mailers/comment_mailer.rb – which we’ll now edit to add in a new_comment mailer function:
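A minimal sketch of that function, assuming a post belongs to a user who has an email attribute:

```ruby
# app/mailers/comment_mailer.rb
class CommentMailer < ApplicationMailer
  def new_comment(comment)
    @comment = comment
    # Send only to the author of the post this comment belongs to
    mail to: @comment.post.user.email, subject: 'New comment on your post'
  end
end
```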

As you can see, this mailer will send only to the author of the post that this comment is for. But exactly what does it send? That’s what we need to add next – our HTML and text email templates. First we need to create the templates:
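Mailer views live under app/views/comment_mailer, named after the mailer function:

```shell
touch app/views/comment_mailer/new_comment.html.erb
touch app/views/comment_mailer/new_comment.text.erb
```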

And now we need to add some brief content into each of them:
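Something brief like this works (the wording is illustrative):

```erb
<%# app/views/comment_mailer/new_comment.html.erb %>
<p>A new comment was posted on "<%= @comment.post.title %>":</p>
<p><%= @comment.body %></p>
```

```erb
<%# app/views/comment_mailer/new_comment.text.erb %>
A new comment was posted on "<%= @comment.post.title %>":
<%= @comment.body %>
```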

Perfect, now that we’ve got our mailers set up – we actually need to update our comments_controller.rb to deliver that email (and specifically we need to update the create action):
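Assuming the scaffold-generated create action, the updated version might read:

```ruby
# app/controllers/comments_controller.rb
def create
  @comment = Comment.new(comment_params)

  if @comment.save
    CommentMailer.new_comment(@comment).deliver_later
    render json: @comment, status: :created, location: @comment
  else
    render json: @comment.errors, status: :unprocessable_entity
  end
end
```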

The only line we need to add is the one where we make the CommentMailer call. Also, notice that we’re using the deliver_later method even though we haven’t hooked up a queuing system yet. This method tells ActionMailer to use ActiveJob to send out the emails – and if you don’t have any background job processor in place yet, then ActionMailer will just process the job synchronously. Fun fact.

Go ahead and start your server. As of right now, if you issue a POST request to /comments – such as this one below – then Rails will successfully create a comment and try to send an email.
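For example (the IDs assume the seed data from earlier):

```shell
curl -X POST http://localhost:3000/comments \
  -H 'Content-Type: application/json' \
  -d '{"comment": {"body": "Nice post!", "user_id": 1, "post_id": 1}}'
```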

You should see a successful response from your curl command, as well as Rails logging out its intent to send out an email. No emails will actually send right now because we haven’t hooked up any email settings such as SMTP credentials – but don’t worry, we’ll fix that in a bit. It’s time to start integrating what we currently have with Docker.

Adding in Docker

Note: From this point on, you’ll need to have both the Docker Engine and Compose installed. If you’re on Windows or Mac, you can just install the Docker Toolbox to get both (and more). Otherwise, you’ll need to install them separately.

To implement Docker, we first need to add in a Dockerfile so that we can build our Rails app as a Docker image. Add in the following Dockerfile to the root of your project:
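Roughly the example from Docker’s docs at the time (the Ruby version here is an assumption):

```dockerfile
FROM ruby:2.4
RUN apt-get update -qq && apt-get install -y build-essential libsqlite3-dev nodejs
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
CMD ["rails", "server", "-b", "0.0.0.0"]
```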

There’s nothing special about this Dockerfile; at the time of this writing, you can find this exact example straight from the Docker site about how to create an image from a Rails app. We could now issue a “docker build” command to create this image – but let’s hold off on that. I mentioned in the intro that we’ll be using docker-compose to both build and run all of our images. To use docker-compose, however, we first need to add in another file called docker-compose.yml to the root of our project:
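A plausible compose file for the four services described below (image names and port mappings are assumptions):

```yaml
version: '2'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - redis
      - maildev
  sidekiq:
    build: .
    command: bundle exec sidekiq
    depends_on:
      - redis
  redis:
    image: redis
  maildev:
    image: djfarrelly/maildev
    ports:
      - "1080:80"
```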

I won’t lie – it looks like there’s a lot going on here, but stay with me. All we’re doing here is preparing 4 different services (2 of which are built with this project’s Dockerfile) that will all be run at the same time – in the same network. That last statement’s really important, because networks in Docker are really cool. Since each container technically has its own unique address – it’s a little difficult for containers to know how to communicate with each other. Docker networks are great because they will automatically resolve the hostname of a container based on what you name it – so that if you want to communicate with a container that’s titled sidekiq in the docker-compose.yml file, then you only have to specify its address by its name – “sidekiq”! You do still need to include any ports that the service is running on, though.

If this all sounds confusing – don’t worry, you’re not alone. This is difficult to get a grasp on the first time you see it (and many times after that, too) – but believe it or not, we’re almost done here, so let’s keep going.

Next, we need to add a few settings in our development.rb config file to set up SMTP to send to the Maildev container’s SMTP server:
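Something like this, assuming the service is named maildev in docker-compose.yml:

```ruby
# config/environments/development.rb
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = { address: 'maildev', port: 25 }
```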

The Maildev container’s SMTP server runs on port 25 – which is the default port for SMTP – and you can see here that we’re locating the server just by the string “maildev.” This works because when we run our setup using docker-compose, it creates a network which will resolve that hostname and send the request to the right container. Port 25 is also exposed by default on that container – but only to other containers in the network; you can’t access it outside of the Docker network.

Now, we need to tell ActiveJob (Rails’ built-in background job wrapper) that we want to use the sidekiq adapter to queue up our jobs. This officially throws Sidekiq into our application:
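One line in the application config:

```ruby
# config/application.rb
config.active_job.queue_adapter = :sidekiq
```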

Finally, there’s one last thing: Sidekiq by default assumes that Redis is running on localhost:6379 – but since the Redis server is running in a different container than our Rails app, we need to change this. We instead need to direct our Redis traffic to the actual Redis container, which we can easily do by using the hostname “redis.” To do that, we just need to add a simple initializer:
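A minimal initializer, assuming the Redis service is named redis in docker-compose.yml:

```ruby
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://redis:6379' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://redis:6379' }
end
```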

And that’s it – we’re done!

Running our Application

This part’s super easy; we’ve set everything up, and now we just need to run our app with docker-compose!
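From the project root:

```shell
docker-compose up --build
```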

This will handle building all of the images that are custom, as well as pulling down and installing the other images from Docker Hub. Our app is officially up and running now with Docker, so let’s test it with the same curl command we had before:
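The same request as before, since port 3000 is still mapped to our host:

```shell
curl -X POST http://localhost:3000/comments \
  -H 'Content-Type: application/json' \
  -d '{"comment": {"body": "Nice post!", "user_id": 1, "post_id": 1}}'
```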

If everything’s set up properly, you should get a successful JSON response that includes the comment record you just created. This command still works because we’re mapping port 3000 of our Rails app container to port 3000 on the Docker host (which is our local machine) – so we can communicate with it the same way we did before. After you issue that POST request, jump to http://localhost:1080 to see Maildev in action – and you can see the exact email you just created!

Example of Maildev


Final Thoughts

While Docker is definitely a hot topic right now, will it stand the test of time? Who knows – but the concepts behind containerization are here to stay – that much we know for sure, and Docker is really helping to push that movement forward. If you enjoyed this post about Docker and want to check out how companies use containerization in the real world, you should read more into the microservice architecture. Microservices have been around for a while, but with the popularization of Docker (and containers in general), they’re being talked about a lot more as a viable architecture for even small- to mid-size projects.

You’ve now got the knowledge, so when you get a chance, play around with Docker and see if it’s right for your Rails app. The answer could be yes – or it could be no, and either is okay! Docker is a neat tool, but only you can decide if it fits your project’s needs.

P.S. If you’d like to pull down the code we discussed here, check out the demo based on this blog post.

Talk: Python vs Ruby

Python and Ruby share a lot of similarities both syntactically and by design – probably more so than most other common languages you hear about today – but they’re clearly different languages with different ways of doing things.

In this talk, you’ll hear Aaron Krauss (aka “The Ruby Guy”) talk about how Ruby works compared to Python. Most of the presentation will be live-coding driven, and if you have specific questions about how Ruby works, Aaron will be more than happy to interactively answer them and code them out. This talk will be completely unbiased – which means you should come with that mentality in mind as well! Neither Ruby nor Python is “better” than the other – they’re just different, and that’s what we’ll be focusing on!

Building a JSON API with Rails – Part 6: The JSON API Spec, Pagination, and Versioning

Throughout this series so far, we’ve built a really solid JSON API that handles serialization and authentication – two core concepts that any serious API will need. With everything we’ve learned, you could easily build a stable API that accomplishes everything you need for phase 1 of your project – but if you’re building an API that’s gonna be consumed by a large number of platforms and/or by a complex front-end, then you’ll probably run into some roadblocks before too long. You might have questions like “what’s the best strategy to serialize data?,” or “how about pagination or versioning – should I be concerned that I haven’t implemented any of that yet?” Those are all good questions that we’re going to address in this post – so keep following along!


The JSON API Spec

Active Model Serializers – my go-to Rails serialization gem – makes it so simple to control what data your API returns in the body (check out my post on Rails API serialization to learn more about this topic). By default, however, there’s very little structure as to how your data is returned – and that’s on purpose; AMS isn’t meant to be opinionated – it just grants you, the developer, the power to manipulate what your Rails API is returning. This sounds pretty awesome, but when you start needing to serialize several resources, you might start wanting to follow a common JSON response format to give your API a little more structure as well as making documentation easier.

You can always create your own API response structure that fits your project’s needs – but then you’d have to go through and document why things are the way they are so that other developers can use the API and/or develop on it. This isn’t terrible – but it’s a pain that can easily be avoided because this need has already been addressed via the JSON API Spec.

The JSON API spec is a best-practice specification for building JSON APIs, and as of right now, it’s definitely the most commonly-used and most-documented format for how you should return data from your API. It was started in 2013 by Yehuda Katz (former core Rails team member) as he was continuing to help build Ember.js, and it officially hit a stable 1.0 release in May of 2015.

If you take a look at the actual spec, you’ll notice that it’s pretty in-depth and might look difficult to implement just right. Luckily, AMS has got our back by making it stupid-simple to abide by the JSON API spec. AMS determines JSON structure based on an adapter, and by default, it uses what’s called the “attributes adapter.” This is the simplest adapter and puts your raw data as high up in the JSON hierarchy as it can, without thinking about any sort of structure other than what you have set in the serializer file. For a simple API, this works; but for a complex API, we should use the JSON API spec.

To get AMS to use the JSON API spec, we literally have to add one line of code, and then we’ll automatically be blessed with some super sweet auto-formatting. You just need to create an initializer, add the following line, and restart your server:
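The one line (for AMS v0.10):

```ruby
# config/initializers/active_model_serializers.rb
ActiveModelSerializers.config.adapter = :json_api
```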

Let’s do a quick show-and-tell, in case you want to see it in action before you try it. Assuming we have the following serializer for a post:
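A serializer along these lines (the attributes are assumptions):

```ruby
# app/serializers/post_serializer.rb
class PostSerializer < ActiveModel::Serializer
  attributes :id, :title, :body
  belongs_to :user
end
```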

Then our response will go from this:
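With the default attributes adapter, something like:

```json
{
  "id": 1,
  "title": "Hello World",
  "body": "My first post!",
  "user": { "id": 1, "name": "Aaron" }
}
```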

to this!
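With the JSON API adapter, the same record becomes:

```json
{
  "data": {
    "id": "1",
    "type": "posts",
    "attributes": {
      "title": "Hello World",
      "body": "My first post!"
    },
    "relationships": {
      "user": {
        "data": { "id": "1", "type": "users" }
      }
    }
  }
}
```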

The JSON API spec also sets a precedent for how paginated resource queries should be structured in the url – which we’re getting to next!


Pagination

Pagination prevents a JSON response from returning every single record in a resource’s response all at once, and instead allows the client to request a filtered response that it can continue querying on as it needs more data. Pagination is one of those things where every project seems to do it differently; there’s very little standard across the board – but there is in fact a best practice way to do it in a JSON API. A paginated resource on the server should always at a minimum tell the client the total number of records that exist, the number of records returned in the current request, and the current page number of data returned. Better paginated resources will also create and return the paginated links that the client can use (i.e. first page, last page, previous page, next page), but they tend to do that in the response body – and that’s not good. The reason this is frowned upon is because while dumping pagination links in the response body may be easy, it really has nothing to do with the actual JSON payload that the client is requesting. Is it valuable information? Certainly – but it’s not raw data. It’s meta-data – and RFC 5988 created a perfect place to put such paginated links: the HTTP Link header.

Here’s an example of a link header:
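For instance (the URLs are illustrative):

```
Link: <http://localhost:3000/posts?page=1&per_page=20>; rel="first",
      <http://localhost:3000/posts?page=1&per_page=20>; rel="prev",
      <http://localhost:3000/posts?page=3&per_page=20>; rel="next",
      <http://localhost:3000/posts?page=5&per_page=20>; rel="last"
```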

That might seem like a large HTTP header – but it’s blatantly obvious what’s going on, and we’re keeping our response body clean in the process. Now, just like with the JSON API spec, you might be asking if you have to manually add these links in when returning any paginated response – and the answer is no! There are gems out there that do this automatically for you while following best practices! Let’s get into the code.

To start with, we’ll need to use one of the two most popular pagination libraries in Rails: will_paginate or kaminari. It literally doesn’t matter which we pick, and here’s why: both libraries take care of pagination – but they’re really geared towards paginating the older styles of Rails apps that also return server-side rendered HTML views, instead of JSON. On top of that, neither of them follow the best practice of returning paginated links in the Link header. So, are we out of luck? No! There’s a wonderful gem that sits on top of either of these gems called api-pagination that takes care of what we need. Api-pagination doesn’t try to reinvent the wheel and create another implementation of pagination; instead, it uses either will_paginate or kaminari to do the actual logic behind pagination, and then it just automatically sets the Link header (as well as making the code changes that you as the developer have to make much, much simpler).

We’ll use will_paginate with api-pagination in this example. For starters, add this to your Gemfile:
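In the Gemfile:

```ruby
# Gemfile
gem 'will_paginate'
gem 'api-pagination'
```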

Next, install them and restart your server:
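```shell
bundle install
```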

Let’s update our Post controller to add in pagination. Just like with the JSON API spec above, we only have to make a single line change. Update the post_controller’s index action from this:
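The scaffold-generated action presumably looked like:

```ruby
# app/controllers/posts_controller.rb
def index
  posts = Post.all
  render json: posts
end
```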

to this:
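The one-line change:

```ruby
# app/controllers/posts_controller.rb
def index
  posts = Post.all
  paginate json: posts
end
```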

Do you see what we did? We just removed the render function call and instead added the paginate function call that api-pagination gives us. That’s literally it! Now if you query the following route, then you’ll receive a paginated response:
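For example, a request like this, with a sketch of the kind of response you’d get back (record data elided):

```
GET http://localhost:3000/posts?page=2&per_page=10
```

```json
{
  "data": [ ... ],
  "links": {
    "first": "http://localhost:3000/posts?page%5Bnumber%5D=1&page%5Bsize%5D=10",
    "next": "http://localhost:3000/posts?page%5Bnumber%5D=3&page%5Bsize%5D=10",
    "last": "http://localhost:3000/posts?page%5Bnumber%5D=5&page%5Bsize%5D=10"
  }
}
```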


You’ll notice that after all my babbling about putting paginated links in the HTTP header instead of the response body, they still managed to find themselves in the response body! This is a neat feature of AMS if you’re using the JSON API adapter; it will recognize if you’re using either will_paginate or kaminari, and will automatically build the right pagination links and set them in the response body. While it’s not a best practice to do this – I’m not too worried about removing them because we’re still setting the HTTP Link header. We’re sort of in this transition period where many APIs are still placing paginated links in the response body – and if the AMS gem wants to place them in there while requiring no effort from the developer, then be my guest. It may help ease the burden of having new clients transition to parsing the Link header.

Now, here’s a little caveat. The JSON API spec has a preferred way of querying paginated resources, and it uses the page query object to do so, like in this example:
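The JSON API-style equivalent of the query above:

```
GET http://localhost:3000/posts?page[number]=2&page[size]=10
```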

This query is identical to our query above; we just swapped out per_page for page[size], and page for page[number]. By default, the links that AMS creates follow this new pattern, but api-pagination by default doesn’t know how to parse that. Don’t worry though, it’s as easy as just adding a simple initializer to allow api-pagination to handle both methods of querying for paginated resources:
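Something along these lines, using api-pagination’s configurable param hooks (in Rails 5 you may need to adjust the Hash check for ActionController::Parameters):

```ruby
# config/initializers/api_pagination.rb
ApiPagination.configure do |config|
  # Prefer the JSON API-style page[number]/page[size] params,
  # falling back to plain page/per_page
  config.page_param do |params|
    params[:page].is_a?(Hash) ? params[:page][:number] : params[:page]
  end

  config.per_page_param do |params|
    params[:page].is_a?(Hash) ? params[:page][:size] : params[:per_page]
  end
end
```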

And voilà – add this initializer, restart your server, and now your API can handle paginated query params passed in as either page/per_page or page[number]/page[size]!


Versioning

The last best-practice topic we’ll be covering here is how to properly version your API. Versioning an API becomes important when you need to make non-backwards-compatible changes; ideally, an API will be used by various client applications – and it’s unfeasible to update them all at the same time, which is why your API needs to be able to support multiple versions simultaneously. Because you don’t really need a solid versioning system early on in the development phase, this is an often-overlooked topic – but I really implore you to start thinking about it early, because it becomes increasingly difficult to implement down the road. Spend the mental effort now on a plan to version your API, and save yourself a good deal of technical debt later.

Now that I’ve got my soap box out of the way, let’s get down to the best practices of implementing a versioning system. If you Google around, you’ll find that there are two predominant methodologies to how you can go about it:

  • Version in your URLs (e.g. /v1/posts)
  • Version via the HTTP Accept header

Versioning through your URLs is the easier of the two to understand, and it’s got a big benefit: it’s much easier to test. I can send you a link to a v1 path as well as a v2 path – and you can check them both out instantaneously. The drawback, however – and the reason this approach isn’t a best practice – is that the path in your URL should be completely representative of the resource you’re requesting (think /posts, /users/1, etc.), and which version of the API you’re using doesn’t really fit into that. It’s important – sure – but there’s a better place to put that information: the HTTP Accept header.

The Accept header specifies which media types (aka MIME types) are acceptable for the response; this is a perfect use-case for specifying which version of the API you want to hit, because responses from that version are the only ones that you’ll accept!

For our demo, we’re going to specify the version in a custom media type that looks like this:
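For example, with a hypothetical vendor name of blogapi:

```
Accept: application/vnd.blogapi.v1+json
```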

Here, you can easily see how we set the version to v1 (If you’d like to know how we got this format of media type, check out how MIME vendor trees work). If we want to query v2, then we’ll just swap out the last part of that media type.

Let’s get to some implementation. We won’t need any new gems, but there are a couple of things we do need to do first:

  • Move all of the files in our app/controllers directory into a v1 directory. So the full path of our controllers would then be app/controllers/v1.
  • Move all of the code in our controllers into a V1 module. That looks like this:

  • Wrap all of our routes in a scope function call, and utilize an instantiated object from a new ApiConstraints class that we’ll add in (this will filter our routes based on the Accept header).
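Sketches of those last two steps (module, resource, and class names are assumptions):

```ruby
# app/controllers/v1/posts_controller.rb
module V1
  class PostsController < ApplicationController
    # ... existing actions ...
  end
end
```

```ruby
# config/routes.rb
require 'api_constraints'

Rails.application.routes.draw do
  scope module: :v1, constraints: ApiConstraints.new(version: 1, default: true) do
    resources :users
    resources :posts
    resources :comments
  end
end
```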

We still need to add in the code for our ApiConstraints class, but you can kind of see what’s going on here. We’re specifying that this set of routes will specifically handle any v1 calls – as well as being the default routes, in case a version isn’t specified.

The constraints option in the scope function is powerful and it works in a very specific way: it accepts any sort of object that can respond to a method called matches?, which it uses to determine if the constraint passes and allows access to those routes. Now for the last step; let’s add the logic for ApiConstraints. To do this, we’re going to add a file in the /lib directory called api_constraints.rb:

You can see here that all this class does is handle the matches? method. In a nutshell, it parses the Accept header to see if the version matches the one you passed in – or it will just return true if the default option was set.

If you liked this neat little constraint – then I’m glad, but I take zero credit for the logic. Ryan Bates did a really great RailsCast on versioning an API a few years ago, and this is his by-the-book recommendation for how to parse the Accept header.

You’re now all set up with the best practice of specifying an API version via the Accept header! When you need to add a new version, you’ll create new controllers inside of a version directory, as well as add new routes that are wrapped in a versioned constraint. You don’t need to version models.

Final Thoughts

We covered a lot, but I hope it wasn’t too exhausting. If there’s one common goal towards building a best-practice JSON API, it’s to use HTTP as it’s meant to be used. It’s easy to dump everything in your response body in an unorganized manner – but we can do better than that. Just do your best to follow RESTful practices, and if you have any questions about what you’re doing, then don’t be afraid to look it up; the Internet will quickly guide you down the right path.

Building a JSON API with Rails – Part 5: Afterthoughts

This post has been a long time coming, but I wanted to address some topics about building a JSON API with Rails that didn’t fully fit into the actual building process of our API. If you’re unfamiliar with building a JSON API with Rails at all, then I’ll direct you to the very first post in this series, and you can start there. In this final post, I wanted to discuss topics such as testing, CORS, filtering data, nested vs. flat routing architecture, and more – basically, things that I find valuable to know about as I build my own Rails APIs. Let’s get to it!

Flat vs Nested Routing Architecture

Which is better to use – flat or nested routes? I get asked this question quite a bit, and before I get into it, let me demonstrate what each route type means. Let’s say that I have a comment with an ID of 4. Whether I use nested or flat routes, my GET request to this endpoint would look like this:
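Either way:

```
GET /comments/4
```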

And that’s it – very easy. Now let’s take that up a notch. What if I want to find all comments that exist for a certain post, and that post has an ID of 1. Here is where we deviate between these two routing types. A nested route to this endpoint might look something like this:
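For example:

```
GET /posts/1/comments
```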

While a flat route would look like this:
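```
GET /comments?post_id=1
```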

See the difference? Nested routes make use of nesting resource names and/or IDs, while flat routes limit the route endpoint to just one resource name and pass in the rest of the necessary information as URL parameters. So which one is better?

Well, nested routing looks prettier, I think we all agree there – but I prefer to use flat routing as I’m building out my API endpoints. Why, you might ask? Well, for two reasons:

It keeps things simpler. With flat routes, I only have one way to access exactly the data that I need. This becomes important when we start dealing with associative entities. For example, a comment can belong to either a post or a user. With nested resources, that means I have multiple endpoints that I can access comments with: /comments, /users/1/comments, or /posts/1/comments. With flat routing – I just have one: /comments.

It works better with client-side packages such as Ember Data, RESTangular, ngResource, etc. By default, these libraries like to use flat routes and are much easier to work with if you do keep the routes flat and just pass in the necessary data as URL params.

tl;dr – I like flat routes much better, and always use them over nested routes when building a JSON API.


CORS

CORS is short for Cross-Origin Resource Sharing, a mechanism that allows resources to be accessed by domains outside of the host domain. By default, this is turned off, which means that if your API lives on a different domain than the one requesting the data, then that request will be denied unless CORS is turned on for that domain. Note: This only affects client-side requests. Server-side requests or cURL will still work just fine, regardless of CORS.

Rails has a built-in way to configure CORS, and I used that for a bit, but it honestly got to be a pain to deal with after a while. Instead, I recommend you use the gem rack-cors to handle all of your CORS needs. With this gem installed, for example, in order to allow GET, POST, and OPTIONS requests for all domains, this is all you need to add into your config/application.rb:
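Using rack-cors’ DSL – note that origins '*' opens the API to every domain, so tighten it as needed:

```ruby
# config/application.rb
config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'
    resource '*', headers: :any, methods: [:get, :post, :options]
  end
end
```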

See how easy that is! And with rack-cors’ nice DSL, you can see it’s really easy to configure CORS just like you want to.

No Views

If you followed along with this series and built a JSON API while reading, then you may have noticed that there are no views or layouts. This is completely intentional; the reason we don’t have them is that we don’t need them! All we’re doing in a JSON API is returning JSON – no HTML/CSS/JS necessary. This simplifies things immensely compared to a full Rails web app. You can sort of think of serializers as a substitute for views, however, since they shape our response – but they’re still significantly easier to deal with than full view templates.

No #edit or #new Controller Actions

If you’ve built a Rails app before, you may be familiar with the 7 default controller actions that a resource has: Index, Show, New, Create, Edit, Update, and Destroy. But in our API, we’re missing two of those – Edit and New. Why?

Edit and New, contrary to their names, actually both correspond to GET requests and are specifically triggered when you access a page (typically that has a form) which will eventually submit a POST or PUT request. You need this preliminary GET request to happen in order to provide any necessary data prior to submitting your POST or PUT request.

With a JSON API, you don’t have any web pages that you’re interacting with, so you don’t need to load anything prior to submitting your POST or PUT request – you just submit it. Because of that simplicity, you don’t need the Edit or New actions anymore, so we just remove them altogether. See, using Rails just as an API is simpler than using it as a full web application platform!

Filtering Resources

Earlier, I recommended using flat routes over nested routes, and part of that is because of how easy it allows you to filter resources based on your URL params. In order to filter our resources by any data attribute at all, all we need to do is change one line in our index actions – let’s do an example with our comments controller.

This is the original code for the index action:
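Presumably something like:

```ruby
# app/controllers/comments_controller.rb
def index
  comments = Comment.all
  render json: comments
end
```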

And we’re just going to change the first line of the action to this:
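With a hypothetical filter_params strong-parameters method whitelisting the filterable attributes:

```ruby
# app/controllers/comments_controller.rb
def index
  comments = Comment.where(filter_params)
  render json: comments
end

private

# Hypothetical whitelist of filterable attributes
def filter_params
  params.permit(:post_id, :user_id, :body)
end
```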

And that’s it! Now, we can filter comments not only by post_id or user_id, but by any attribute that our strong parameters method whitelists – such as body. For example, any of these will work as expected:
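For example:

```
GET /comments?post_id=1
GET /comments?user_id=2
GET /comments?body=Great+post!
```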


Testing

Last but not least, I wanted to cover testing a JSON API. This could easily be a blog post on its own – or a series of blog posts, really – so I just want to give the gist of how to begin testing your API. There are multiple libraries that provide testing features for your Rails apps – some of the common ones being:

  • TestUnit
  • Cucumber
  • Minitest
  • RSpec

My personal preference for a testing library is RSpec. When you test an API, the majority of your tests will most likely target the controllers, but there are more categories of tests that you should write than just controller tests. I’ll list out the categories that I test for, with an example of each (examples given using RSpec):

Routing Tests

Routing tests should be written in order to verify that each of your individual request types end up making it to their intended controller action:
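A sketch, assuming a comments resource:

```ruby
# spec/routing/comments_routing_spec.rb
require 'rails_helper'

RSpec.describe CommentsController, type: :routing do
  it 'routes GET /comments to comments#index' do
    expect(get: '/comments').to route_to('comments#index')
  end

  it 'routes POST /comments to comments#create' do
    expect(post: '/comments').to route_to('comments#create')
  end
end
```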

Request Tests

Request tests should be written in order to verify that your basic requests either respond successfully (200 status code) or unsuccessfully (400 status code) – due to good or bad authentication. These would need to be written with your specific API auth structure in mind, but here’s a simple example:
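A sketch of a request spec – the token-based Authorization header shown here is purely an assumption about the auth scheme, so swap in whatever your API actually uses:

```ruby
require 'rails_helper'

RSpec.describe 'comments requests', type: :request do
  it 'responds successfully with valid authentication' do
    get '/comments', headers: { 'Authorization' => 'Token token=valid-token' }
    expect(response).to have_http_status(200)
  end

  it 'responds unsuccessfully with bad authentication' do
    get '/comments', headers: { 'Authorization' => 'Token token=bad-token' }
    expect(response).to have_http_status(400)
  end
end
```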

Model Tests

These are tests you may be familiar with, and are written in order to test out model methods, validations, scopes, and more things that are defined in the models. For our example, let’s assume that in order to successfully create a comment, it requires both a post_id and a user_id. To test for that, we could write a test like this:
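A sketch of that validation test (assuming a Comment model with the presence validations described):

```ruby
require 'rails_helper'

RSpec.describe Comment, type: :model do
  it 'is invalid without a post_id' do
    comment = Comment.new(user_id: 1, body: 'Nice post!')
    expect(comment).not_to be_valid
  end

  it 'is invalid without a user_id' do
    comment = Comment.new(post_id: 1, body: 'Nice post!')
    expect(comment).not_to be_valid
  end

  it 'is valid with both a post_id and a user_id' do
    comment = Comment.new(post_id: 1, user_id: 1, body: 'Nice post!')
    expect(comment).to be_valid
  end
end
```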

Controller Tests

Controller tests are going to be where all of your business logic should be tested – and thus these are usually the most complex. At the very least, you should test for both successes and failures for all of your controller actions – and thus test out all necessary request types such as GET, POST, PUT/PATCH, and DELETE. If you have any additional logic – which you probably will – then you’ll want to build tests for those too. 100% code coverage is the goal – so if you add in new logic, make sure you build some tests for it! Because there are a ton of different controller tests you can build, I’m just gonna show you two simple tests – one for a GET request to the #show action, and one for a POST request to the #create action:

Like I said, these are pretty basic examples of how to write API tests, but you should always make sure that you do actually build tests for your project. Personally, I don’t practice TDD – I usually write my tests after I have written my actual logic, and then fix anything that came up, but regardless of what testing practices you follow, having your project supported by tests will make it much less brittle when updating and will be significantly easier for you and/or your team to manage in the future.

If your API has mailers, then you’ll want to write tests for those too. In addition to RSpec, I like to use the gems Factory Girl – to make spawning dummy objects easier – and Database Cleaner – to ensure that my testing environment stays clean between tests.


We covered a lot of various topics here, and this was kind of my concluding post to address the different things that might be important to know as you’re starting to build out your own JSON API using rails. This is officially the final post of this series, so I hope you enjoyed it! If you’re new to this series, then I highly recommend you begin back at the first post which talks about getting started on how to build a JSON API with rails.

Now that you have the skills – make sure you use them responsibly. Just like Captain Planet says, the power is yours!

Metaprogramming in Ruby: Part 2

Welcome back to the Metaprogramming in Ruby series! If you haven’t done so yet, you may want to review Metaprogramming Ruby: Part 1 in order to catch up to the content we’re going to talk about today. In that post, we discussed open classes, ruby’s object model & ancestors chain, defining methods dynamically, calling methods dynamically, and ghost methods. We’re going to finish up the metaprogramming talk in this post and show you some really powerful tools you can add to your ruby arsenal. Let’s begin.


We’re going to start our discussion of closures by addressing scope. There are 3 spots in ruby where scope will shift (these are properly dubbed Scope Gates):

  • Class definitions
  • Module definitions
  • Methods

That makes something like this impossible:
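A minimal sketch of the problem (names are illustrative): a local variable defined outside can’t be seen once you cross the class and def gates.

```ruby
my_var = "abc"

class MyClass
  def fetch_my_var
    my_var # NameError – both `class` and `def` are scope gates
  end
end

begin
  MyClass.new.fetch_my_var
rescue NameError => e
  puts e.class # NameError
end
```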

But with metaprogramming, we can bend scope to our will and make this happen. Before we do that though, we need to discuss the two ways that you can define a class in ruby.

Defining a Class – Statically

This is the way that we’re all familiar with:
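Something like this, with an illustrative class and method name:

```ruby
class MyClass
  def foo
    "bar"
  end
end

MyClass.new.foo # => "bar"
```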

There’s nothing new going on here. But did you know we can also define a class at runtime? If you’re unsure what I mean, check out this next example.

Defining a Class – Dynamically

This is an alternative way to define a class in Ruby. If you remember the object model we discussed in the previous post, you’ll recall that a class in ruby is also an object, and it has a class of Class. It was confusing to think about then, but here is that idea in action. If we flip that phrase around, then if we instantiate an object out of class Class, that object is also a class. That’s exactly what we’re doing here! We’re creating a class by calling Class.new – all at runtime!
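A sketch of the dynamic version, with hypothetical foo and title methods to match the description below:

```ruby
my_var = "abc"

MyClass = Class.new do
  # a classic def is still a scope gate – my_var is NOT visible in here
  def foo
    "foo!"
  end

  # define_method's block is a closure, so my_var IS visible
  define_method :title do
    "my_var holds #{my_var}"
  end
end

MyClass.new.foo   # => "foo!"
MyClass.new.title # => "my_var holds abc"
```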

This allows us to bypass scope gates and access my_var inside of the class declaration. From there, you have two ways to define methods. You can define methods the classical way, such as how we defined the foo method – but that’s still a scope gate, so you can’t access my_var inside that method. If you want to access my_var inside of your method, you’ll need to dynamically define a method – just like how we defined the title method.

You can see a full example of this as we return to our previous discussion on scope:
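Returning to the earlier “impossible” example, a flat-scope sketch might look like this:

```ruby
my_var = "the gates are open"

# neither Class.new's block nor define_method's block is a scope gate,
# so my_var stays visible the whole way down
MyClass = Class.new do
  define_method :show_var do
    my_var
  end
end

MyClass.new.show_var # => "the gates are open"
```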

This seemingly “scopeless” process is called a Flat Scope. Whether you use this concept or not is up to you, but ruby provides you with the tools to let you make that choice.

Blocks, Procs, and Lambdas

While they don’t necessarily fit the bill of metaprogramming, blocks, procs, and lambdas often aren’t fully understood – even by experienced developers – so I wanted to review them and clear up any misconceptions about how they work. For starters, most things in Ruby are objects. Blocks are not. To pass them around, you use the & operator.
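A minimal sketch of that hand-off (greet and my_block are hypothetical names):

```ruby
my_block = proc { |name| "hello, #{name}" }

def greet(&blk)
  yield("ruby") # the block arrives via &blk and is yielded here
end

greet(&my_block) # => "hello, ruby"
```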

I defined a block in one scope, and passed it to a method using the & operator – where it was then yielded. Let’s move on now to the differences between procs and lambdas.

There are 2 main differences:

  • Lambdas throw an ArgumentError if the argument count doesn’t match when you call them. Procs do not.
  • A return inside a lambda returns locally – it just exits the lambda – whereas a return inside a Proc returns from the enclosing method in which the Proc was defined.

That first difference isn’t too difficult to understand, but the second difference…not so much. Let’s do some examples to illustrate exactly how these two block types work.

Starting off with a lambda example:
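A sketch matching the description below (the method and variable names are my assumptions):

```ruby
def lambda_example
  my_lambda = lambda { |a, b| return a * b }
  result = my_lambda.call(2, 4)
  result * 10
end

lambda_example # => 80
```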

This executes about how we would expect. In the lambda_example method, we define a lambda that just accepts two arguments and multiplies them together. We then call that lambda with 2 and 4, and multiply the result by 10. That gives us 80.

If we called the lambda with any more or fewer arguments than two, it would fail to execute and give us an ArgumentError.

Let’s move on to procs, where you can see a real difference:
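Same sketch as before, with lambda swapped out for proc:

```ruby
def proc_example
  my_proc = proc { |a, b| return a * b }
  result = my_proc.call(2, 4)
  result * 10
end

proc_example # => 8
```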

Hold on there – this code is exactly the same. All we did was swap out lambda for proc – and now the response is 8?

Yes, it’s 8, and the reason comes down to how return behaves inside a proc. A return inside a proc doesn’t just exit the block – it returns from the enclosing method, at the point where the proc was called, which in this example is on line 3. Therefore, when we call the proc and it hits its return with the value 8, the proc forces the entire method to return with that value instead of continuing on with the rest of the code. In fact, line 4 never even runs – the method has already returned because of the proc.

Other than the fact that you can call the proc with as many arguments as you want and you won’t get an ArgumentError, this is the major sneaky difference between procs and lambdas.

To summarize closures: scope usually works as expected, but once you know how to manipulate it, you can do powerful things. Just be sure you know what you’re doing. Now let’s move on to evals.


In ruby, there are 3 main types of evals:

  1. Instance Eval
  2. Class Eval
  3. Eval

Instance Eval

instance_eval is a method we can use to bust open and possibly manipulate an object’s internals. Here’s an example.
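A sketch of that example (MyObject and the variable names are illustrative):

```ruby
class MyObject
  def initialize
    @v = 1
  end
end

obj = MyObject.new
# obj.v would raise NoMethodError – there is no reader for @v

outside_var = nil
obj.instance_eval do
  @v += 1          # read and modify the private instance variable
  outside_var = @v # blocks aren't scope gates, so we can reach outside too
end

outside_var # => 2
```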

In our obj object, @v is a private instance variable. We can’t call obj.v or else we’ll get an error – but we can use instance_eval not only to access and modify private instance variables, but also to copy them into a variable that’s in scope outside of that block – because blocks aren’t scope gates. While this example is pretty simple, I hope you can see how powerful this can be. But again – be careful. While you can access just about any attribute on an object this way, there’s usually a reason why instance variables or methods are private.

Class Eval

Even with open classes and dynamic class creation, we couldn’t update a class from within another scope gate (like a method). We also couldn’t get into a class based on a variable – we had to use the constant. The method class_eval allows us to do all of these things.
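A sketch of opening a class from inside a method, via a variable (add_shout and shout are hypothetical names):

```ruby
def add_shout(a_string)
  # a_string holds a class object – here, String – and class_eval opens it up
  a_string.class_eval do
    def shout
      upcase + "!"
    end
  end
end

add_shout(String)
"hello".shout # => "HELLO!"
```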

For starters, class_eval lets us break into an existing class at any point in time – like inside a method declaration, as you see here. Secondly, and most importantly, class_eval allows us to open up a class based on a variable instead of the constant for that class. In this example we pass the constant String into the method and assign it to a parameter variable, then open up the String class through that variable. We couldn’t do that with open-class syntax such as class a_string, because class expects a constant – it would never evaluate the variable to get at the class it holds.

We also bypass scope gates when we use class_eval and thus use a flat scope, similar to dynamic class declaration like we discussed earlier.


We can now move on to the final eval function – just plain eval. This function is drop-dead simple to understand, but it’s extremely powerful – and very dangerous. All eval does is accept a single argument – a string – and run it as ruby code at the same point in runtime as when the eval method is called. Here’s a very basic example where we just use eval to append a value to an array:
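The sketch below shows the idea – eval runs the string in the current binding, so the local variables are visible to it:

```ruby
array = [1, 2, 3]
element = 4

eval("array << element") # runs the string as ruby code, right here, right now

array # => [1, 2, 3, 4]
```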

We never actually run the ruby code ourselves to append the element variable to array – we just tell eval to do it by passing it the ruby code as a string. We don’t gain any benefit here by using eval, but take a look at this deeper example to see how powerful it can be:
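Here’s a sketch of that deeper example (the class and attribute names are assumptions; the `<<~` heredoc is the multiline string referenced below):

```ruby
class_name = "Book"
attribute  = "title"

eval <<~RUBY
  class #{class_name}
    attr_accessor :#{attribute}

    def initialize(#{attribute})
      @#{attribute} = #{attribute}
    end
  end
RUBY

b = Book.new("Metaprogramming Ruby")
b.title # => "Metaprogramming Ruby"
```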

If you’re unfamiliar with the syntax after the eval method call, that’s just a multiline string in ruby. In this example, we have 2 local variables in scope when we call eval, and we are using those variables to open up a class, create an attr_accessor, and write a constructor. But we’re doing it all by embedding variables into our multiline string. This executes as valid ruby at runtime, but this would never, ever be valid ruby code that we could write without the use of eval. Starting to see the power?

Good. Now we can talk about how dangerous eval is. Let’s say you want to create a program that allows you to test all the Array methods in a playground-type scenario. All you have to do is call a method with a string argument, and that string argument represents the method that you want to call on an array – just to see what the return value would be. And let’s say you want to expose this program to the world because it’s been very helpful for you, so you put it up on a website.

Here’s the code:
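A sketch of such a playground method, with the benign call – and, commented out, the kind of argument an attacker could send instead:

```ruby
def explore_array(method)
  array = ["a", "b", "c"]
  eval("array.#{method}") # runs whatever string the caller hands us!
end

explore_array("index('c')") # => 2

# but nothing stops a malicious caller from doing something like:
# explore_array("object_id; Dir.entries('.')")
```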

Nothing bad happened, in fact, it returned exactly what I wanted it to – the index of value c, which is 2. But watch what happens if we call explore_array with a different argument:

Woah, what is all that? Yup, by looking at the code, you guessed it. That’s a listing of all the files and subdirectories inside of the main directory that’s running this app. That’s bad – really bad. Using eval often makes you susceptible to code injection, which is why you absolutely have to be sure you know what you’re doing when you use it.

There are plenty of posts out there about how to keep eval as safe as possible, but the moral of the metaprogramming story keeps coming back: with great power comes great responsibility.

Final Thoughts

We’ve reached the end of our journey over Metaprogramming in Ruby. As I mentioned in the first post, we didn’t cover Singleton Methods or Eigenclasses, but we reviewed just about everything else to some degree.

Metaprogramming is an advanced topic in any language, and that certainly applies to ruby. As you’ve seen so far, metaprograming allows you to write some really powerful code – but that code can be dangerous too. Whenever you use metaprogramming you automatically increase the code complexity of your project – it’s much more difficult to read and understand what’s going on. You also may run into other problems as we saw with open classes and the eval method, and those problems are very difficult to debug.

In the words of Matz (the creator of ruby), “ruby treats you like a grown up.” You are given all the tools to write powerful code – it’s just up to you to choose if they’re right for you. Now go forth, fellow ruby developer, and live your destiny. Whether you use these techniques or not – at the very least you’re now more aware of how some of the neat internals of ruby work.

Metaprogramming in Ruby: Part 1

What is Metaprogramming?

Metaprogramming is code that writes code for you. But isn’t that what code generators do, like the rails gem, or yeoman? Or even bytecode compilers?

Yes – but in ruby, metaprogramming typically refers to something else: code that writes code for you dynamically, at runtime. Ruby is a prime language for dynamic metaprogramming because it employs type introspection and is intensely reflective – to a higher degree than just about any other language out there. This allows you to do some really cool things, like adding a ton of functionality with very few lines of code, but there’s a catch: you can jack up a lot of things at the same time and/or end up with practically unreadable code if you’re not careful. The moral of the story is, in Uncle Ben’s words:

“With great power comes great responsibility.”

When Uncle Ben said this, he wasn’t talking about any real life things. He was talking about Metaprogramming.

Let’s Get Started

Let’s say you want to create a method that will accept a string and strip everything out except for alphanumeric characters:
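A first-pass sketch (the function name is illustrative):

```ruby
def to_alphanumeric(str)
  str.gsub(/[^0-9a-z]/i, "") # strip everything but letters and digits
end

to_alphanumeric("Hello, World!") # => "HelloWorld"
```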

That gets the job done, but it’s not very object oriented. Let’s fix that.

Open Classes
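Continuing the alphanumeric example, the open-class version might look like this sketch:

```ruby
class String
  # reopen ruby's built-in String class and add a method to it
  def to_alphanumeric
    gsub(/[^0-9a-z]/i, "")
  end
end

"Hello, World!".to_alphanumeric # => "HelloWorld"
```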

In ruby, you can break open any existing class and add to it just like this – even if you weren’t the one who originally declared it (i.e. the String class here is a ruby default class). Cool stuff. Nuff said. However, there’s a problem with open classes. Check this code out.
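A sketch of the problematic patch described below:

```ruby
class Array
  # swaps every occurrence of `original` for `replacement`
  # (accidentally overwriting ruby's built-in Array#replace!)
  def replace(original, replacement)
    map { |el| el == original ? replacement : el }
  end
end

["a", "b", "c", "b"].replace("b", "x") # => ["a", "x", "c", "x"]
```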

We wrote an Array#replace method that takes 2 arguments: the value you want to replace in the array, and the value you want to replace it with.

This code works just fine. Why is this a problem? The Array#replace method already exists, and it swaps out the entire array with another array that you provide as an arg. We just overwrote that method, and that’s bad. We probably didn’t mean to do that.

This process of editing classes in ruby is called Monkeypatching. It’s not bad by any means, but you definitely need to be sure you know what you’re doing.

Ruby’s Object Model

Before we get further, we need to talk about how Ruby’s object model works.

Image from Metaprogramming Ruby – by Paolo Perrotta

This may look like a confusing diagram, but it neatly lays out how objects, classes, and modules are related in ruby. There are 3 key things of note here:

  • Instantiated objects (obj1, obj2, obj3) have a class of MyClass
  • MyClass has a class of Class (This means that classes are also objects in Ruby. That’s tough to wrap your head around, I know)
  • While MyClass has a class of Class, it inherits from Object

We’ll reference this again later in Part 2. For now, let’s move on to the Ancestors Chain.

Ancestors Chain

This diagram is a little bit easier to understand, and deals solely with inheritance and module inclusion.

Image from Metaprogramming Ruby – by Paolo Perrotta

When you call a method, Ruby goes right into the class of the receiver and then up the ancestors chain, until it either finds the method or reaches the end of the chain. In this diagram, an object b is instantiated from class Book. Book has 2 modules included: Printable and Document. Book inherits from class Object, which is the class that nearly everything inherits from in Ruby. Object includes a module called Kernel. And finally, Object inherits from BasicObject – the absolute parent of every object in Ruby.

Now that we’ve got these 2 very important topics down a little – Ruby’s Object Model and the Ancestors Chain – we can get back to some code.


In Ruby, you can dynamically create methods and dynamically call methods. And call methods that don’t even exist – without throwing an error.

Methods Part 1: Dynamically Defining Methods

Why would you want to dynamically define methods? Maybe to reduce code duplication, or to add cool functionality. ActiveRecord (the default ORM tool for Rails projects) uses it heavily. Check this example out.
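The example was along these lines – a sketch assuming a Rails app with a books table backing the model:

```ruby
class Book < ActiveRecord::Base
end

b = Book.first
b.title # returns the title column for that database row –
        # even though we never defined a title method ourselves
```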

If you’re familiar with ActiveRecord, then this looks like nothing out of the ordinary. Even though we don’t define the title attribute in the Book class, we assume that Book is an ORM wrapper around a Book database table, and that title is an attribute in that table. Thus, we return the title column for that particular database row that b represents.

Normally, calling title on this class should error with a NoMethodError – but ActiveRecord dynamically adds methods just like we’re about to do. The ActiveRecord code base is a prime example of how you can use metaprogramming to the max.

Let’s try this out and create our own methods:
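A sketch of the duplicated version (foo, bar, and baz are illustrative method names):

```ruby
class MyClass
  def foo
    "foo method"
  end

  def bar
    "bar method"
  end

  def baz
    "baz method"
  end
end
```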

See the duplication? Let’s fix that with metaprogramming.
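The same three methods, defined dynamically, might look like this sketch:

```ruby
class MyClass
  # define foo, bar, and baz in one pass with define_method
  [:foo, :bar, :baz].each do |name|
    define_method(name) do
      "#{name} method"
    end
  end
end

MyClass.new.foo # => "foo method"
```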

What we’re doing here is dynamically defining the methods foo, baz, and bar, and then we can call them. The Module#define_method method is something that I personally use a lot, and it’s so, so helpful. Here’s an example of how I used it in a gem I wrote.

You can see how much code we saved here – especially if we were writing real methods. BUT – is it worth the added code complexity? That’s your call.

Methods Part 2: Dynamically Calling Methods

Dynamically calling methods or attributes is a form of reflection, and is something many languages can do. Here’s an example of how to call a method by either the string or symbol name of that method in ruby:
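A sketch, with five hypothetical methods to call:

```ruby
def method_1; "one";   end
def method_2; "two";   end
def method_3; "three"; end
def method_4; "four";  end
def method_5; "five";  end

1.upto(5) do |n|
  puts send("method_#{n}") # the method name depends on the current value of n
end
```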

The Object#send method is how we can dynamically call methods. Here I’m spinning through the numbers 1 through 5, and calling a method whose name is dependent on the current variable value. Clutch.

Because every object in Ruby inherits from Object, you can also call send as a method on any object to access one of its other methods or attributes – like this:
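For example:

```ruby
"hello world".send(:upcase)   # => "HELLO WORLD"
[1, 2, 3].send("include?", 2) # => true (extra args are passed through)
```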

The power of send comes when you want to call a method based on some in-scope situation – often based on a variable’s value. Object#send also allows you to call private methods – so be careful if you don’t mean to do that. Use Object#public_send if you can – it does the same thing, but is restricted from accessing private methods and attributes.

Methods Part 3: Ghost Methods

What happens if we try to execute this code?
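Something along these lines – a Book class with no read method defined:

```ruby
class Book
end

begin
  Book.new.read
rescue NoMethodError => e
  puts e.class # NoMethodError
end
```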

We would get a NoMethodError, because Book doesn’t know how to handle the method read. But it doesn’t have to be that way. Let’s explore method_missing.
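A sketch of a ghost read method (the string it returns is just illustrative):

```ruby
class Book
  def method_missing(name, *args, &block)
    # anything we don't handle is passed to super, so it still errors normally
    return super unless name == :read
    "Flipping pages..."
  end
end

Book.new.read # => "Flipping pages..."
```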

BasicObject#method_missing provides you an option to build a handler that will automatically get called in the event of a NoMethodError – but before that error ever happens. You are then given as parameters the method name that you tried to call, its arguments, and its block. From there, you can do anything you want.

While this looks really cool, be hesitant to use it unless you have a valid reason, because:

  • It takes extra time to hit the method_missing handler, because the call has to traverse the entire ancestors chain first
  • If you’re not careful, you’ll swallow actual errors unintentionally. Use super for anything you don’t intend to handle, which will then call the default method_missing handler.

That’s all we’re going to cover in this first part. We reviewed Open Classes, Ruby’s Object Model, The Ancestors Chain, Dynamic Method Declarations, Dynamic Method Calling, and Ghost Methods, but there’s even more in store for Part 2 where we’ll cover Scopes, Dynamically Defining Classes, Closures (Blocks, Procs, and Lambdas), Various Evals (instance_eval, class_eval, and eval), and Writing a Multi-Purpose Module.

We won’t be covering Singleton Methods and Eigenclasses however. Those concepts cover a good chunk of metaprogramming in Ruby, but they are in my opinion the most confusing concepts to master and I’ve never run into a situation where using them would have made my code much better. So I chose to avoid them altogether, but if you’re interested in learning more there are tons of articles about them.

Thanks for sticking around until the end – and stay on the lookout for Metaprogramming in Ruby: Part 2!

Debugging a Simple Web Server

This is the second part of a short series on how to build a web server using Sinatra. In the previous post we discussed the initial buildout of a simple Sinatra web server, so to make sure we’re all on the same page, you may want to start there if you haven’t read it already. In this post we’ll be reviewing how you can easily debug that web server.

Debugging Tools

We’re going to talk about 3 debugging tools you can use in order to fully test your web server:

  • cURL – to issue requests from the command line
  • pry – to set breakpoints and debug interactively
  • ngrok – to expose your localhost through a public URL

What we won’t be covering however are conventional ruby testing libraries such as TestUnit, RSpec, Cucumber, etc. There are a lot of other posts about how to use those tools, and we’re going to focus specifically on manual testing.

For starters, you can test your web server just by spinning it up (assuming your app is set up like the previous post):
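Assuming your app lives in a file like server.rb (the filename here is an assumption), that’s just:

```shell
ruby server.rb
```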

You can now open up your browser and navigate to http://localhost:4567 to see your web server. But that will only get you so far since you can really only issue GET requests that way – plus it’s slow and tedious. We can do a lot better.

Issuing Requests with cURL

Chances are that you’ve heard of cURL and may use it regularly, but if you haven’t, it’s a super neat tool that allows you to issue HTTP requests from the command line. While browsers do have the capability to issue any type of HTTP request, as a user you’re mostly limited to just GET requests. cURL can help with that. To issue a GET request using cURL, run:
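Something like the following – the /foo path is just a placeholder for a route your server doesn’t define:

```shell
curl localhost:4567/foo
```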

And you’ll get back an HTML response saying that Sinatra doesn’t know how to handle that route. You’ll also see in your server logs that a GET request was made:

Let’s check out some other request types:
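With the -X flag you can issue any HTTP verb; a sketch against the same placeholder route:

```shell
curl -X POST localhost:4567/foo
curl -X PUT localhost:4567/foo
curl -X DELETE localhost:4567/foo

# POST with a JSON body and a header
curl -X POST localhost:4567/foo \
  -H "Content-Type: application/json" \
  -d '{"name":"test"}'
```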

Now you have the full capabilities to issue any request you want to your web server without ever leaving your command line – assuming you like the command line. This ends up being much faster than manual requests through your browser.

Breakpoints with Pry

This next debugging tool isn’t specific to Sinatra – you can apply it to any ruby development you do. There’s a good chance you’ve heard about it too, and perhaps even use it. I’m talking about the gem called pry.

Pry is an immensely handy tool that any ruby developer should have in his/her arsenal. It allows you to halt the runtime of any script and expose the scope of the currently executing line in an interactive REPL (Read-Eval-Print-Loop) shell that allows you to debug any issues you have. This is very similar to how a lot of IDEs work (and even Chrome Dev Tools when debugging javascript) in that you set your breakpoints at a certain line and your code stops executing there to allow you to debug the current state of your program. I like pry because regardless of what tools you use to execute your script (IDE, command line, etc.), pry still works the exact same way – it halts your script and allows you to get deep in debugging. It’s environment agnostic!

To install pry, add the gem to your Gemfile:
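In your Gemfile:

```ruby
# Gemfile
gem 'pry'
```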

You can wrap it in a group :development block if you’d like, since you’ll never use pry in production. Now install it:
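```shell
bundle install
```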

Include it in your main script and give it a whirl.
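A sketch, assuming a root endpoint like the one from the previous post:

```ruby
require 'sinatra'
require 'pry'

get '/' do
  binding.pry # execution halts here and opens a REPL in the server console
  'hello world'
end
```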

Now restart your Sinatra server and let’s check out pry in action. If you make a GET request to your root endpoint, your request will look like it hangs forever, but really your program is just waiting on you to finish debugging! Let’s go ahead and make that GET request to our root endpoint via curl:

And then if you look at your server logs, you’ll notice it has turned into a REPL you can play with!

This is an interactive shell just like IRB and works the exact same way, except that you’re now in the scope in which your program stopped, so you have access to all the variables, objects, classes, methods, etc. that you normally would at that point in runtime.

Whenever you’re done, just type exit to resume normal runtime. There’s a ton more you can do with pry, but I’ll let you explore the docs to see what all it has to offer.

Expose Your Web Server with Ngrok

Lastly, I want to talk about ngrok. Ngrok is a wonderful dev tool that allows you to securely expose any of your localhost ports to a publicly accessible URL. What that means is that you can use ngrok to create a public URL that maps to your server on localhost:4567, and now anyone can have access to it!

Before we get into ngrok too much, let me explain why this is so nice. Because the purpose of this web server is to handle relatively small tasks, there’s a good chance that you don’t want to put a ton of time into building testing frameworks or scaffolding out a needless client-side app to talk to your web server. There’s also a good chance that you’re integrating this web server with something that already exists, such as a third-party API or external database – and those services can’t see your localhost. But they can sure see a public URL, which is what ngrok gives you.

Ngrok is available for download as a binary file at the main site, but it’s also installable as a global npm module, so we’ll install it that way:
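```shell
npm install -g ngrok
```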

To expose the port that our web server is running on, we just run:
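```shell
ngrok http 4567
```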

Ngrok will then take over our terminal pane and show us the public URLs it created:


We can now access the url in the exact same way as localhost:4567 – and all external services can see that URL. You could even navigate to another website and use chrome dev tools to issue an HTTP request via AJAX to our web server, and it will all work. As you can see in the picture above, ngrok also creates a URL that uses HTTP over SSL, so you can even integrate it with fully secure sites too. Just like normal server logs, ngrok tracks and displays which requests were made to which resources:

ngrok logs

Pretty neat, huh? You may not always need ngrok for your web server, but it’s a great tool to have in your dev toolbox for any project. Exposing your localhost to a public URL for testing purposes is a game changer when you just quickly want to see how things might work in production.

The last neat thing about ngrok is that it still allows you to use your other debug tools too, like pry. Earlier we placed a breakpoint in our GET handler for our root endpoint using pry, which allowed us to stop the runtime of our program to debug it. Because ngrok merely maps our localhost ports to public URLs, all the code is exactly the same and updates in real-time (no need to restart ngrok, ever), so if you make a GET request to the root endpoint of our ngrok URL, the interactive REPL through pry will still get triggered in our normal Sinatra server logs, just as if we made a request to localhost!


Though there are many more ways you can test your web server, these are some of my favorite tools that I’ve used lately. Sinatra is a really powerful mini web framework if you’re familiar with ruby, and if you don’t know much about it then feel free to check out my first post on how to build a web server using Sinatra.

Today we went over:

  • using cURL to issue requests quickly
  • using pry to debug our ruby scripts through breakpoints and REPLs
  • using ngrok to expose our localhost ports to a public URL

None of these tools are specific to Sinatra or even mini web-servers in general, and you can therefore use them in a lot of different situations – which I recommend you do. Regardless of which tools you do use to test your web servers, I hope I provided you with at least a couple more ideas on how to manually debug your web projects.

Happy building!