Testing javascript in a dockerized rails application with rspec-rails

The other day I wanted to add support for tests of javascript functionality in a (dockerized) rails application using rspec-rails.

Since rails 5.1 includes system tests with niceties like automatically taking a screenshot on failed tests, I hoped for a way to benefit from these features without changing to another test framework. Lucky me – only recently the authors of rspec-rails added support for so-called system specs. There is not much documentation so far, but there is a lot of useful information in the corresponding bug report #1838, and a friendly guy named Thomas Walpole (@twalpole) is helpfully answering questions in that issue.

To make things a little bit more complicated: the application in question is usually running in a docker container and thus the tests of the application are also run in a docker container. I didn’t want to change this, so here is what it took me to get this running.


Let’s see what we want to achieve exactly: From a technical point of view, we will have the application under test (AUT) and the tests in one container (let’s call it: web) and we will need another container running a javascript-capable browser (let’s call it browser).

Thus we need the tests to drive a remotely running browser (at least when running in a docker environment), which needs to access the application under a different address than usual – namely an address reachable by the chrome-container, since the application will not be reachable via localhost, as is (rightfully) assumed by default.

If we want Warden authentication stubbing to work (as we do, since our application uses Devise) and transactional fixtures as well (i.e. rails handling database cleanup between tests without the database_cleaner gem), we also need to ensure that the application server is started by the tests and that the tests are actually run against that server. Otherwise we might run into problems.

Getting the containers ready

Assuming you already have a container setup (and are using docker-compose like we do) there is not that much to change on the docker front. Basically you need to add a new service called chrome and point it to an appropriate image and add a link to it in your existing web-container.

I’ve decided to use standalone-chrome for the browser part, for which there are docker images provided by the selenium project (they also have images for other browsers). Kudos for that.

The link ensures that the chrome instance is available before we run the tests and that the web-container is able to resolve the name of this container. Unfortunately this is not true the other way round, so we need some magic in our test code to find out the ip-address of the web-container. More on this later.

Other than that, you probably want to configure a volume to be able to access the screenshots, which get saved to tmp/screenshots in the application directory.
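Putting the pieces together, a docker-compose sketch could look like this (the service names, image tag, paths and the DOCKER environment variable placement are assumptions based on the description above – adapt them to your setup):

```yaml
version: '2'
services:
  web:
    build: .
    environment:
      - DOCKER=1        # used in the test code to detect the dockerized setup
    links:
      - chrome          # makes the hostname "chrome" resolvable from web
    volumes:
      - ./tmp/screenshots:/app/tmp/screenshots   # access failure screenshots
  chrome:
    image: selenium/standalone-chrome
```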

Preparing the application for running system tests

There is a bit more to do on the application side. The steps are roughly:

  1. Add necessary dependencies / version constraints
  2. Register a driver for the remote chrome
  3. Configure capybara to use the appropriate host for your tests (and your configured driver)
  4. Add actual tests with type: :system and js: true

Let’s walk them through.

Add necessary dependencies

What we need is the following:

  • rspec-rails version >= 3.7.1
  • rails itself > 5.1.4 (unreleased at time of writing)
  • capybara and capybara-selenium

The required features are already part of 3.7.0, but 3.7.1 is the version I used, and it contains a bugfix which may or may not be relevant.

One comment about the rails version: for the tests to work properly it is vital that puma uses certain settings. Rails 5.1.4 (the version released at the time of writing this) uses the settings from config/puma.rb, which most likely collide with the necessary settings. You can ensure these settings yourself or use rails from branch 5-1-stable, which includes this change. I decided for the latter and pinned my Gemfile to the then current commit.

Register a driver for the remote chrome

To register the required driver, you’ll have to add some lines to your rails_helper.rb:
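A sketch of such a registration (the driver name :selenium_chrome_remote is an assumption; the chrome host and port are the container defaults):

```ruby
# rails_helper.rb
if ENV['DOCKER']
  # the standalone-chrome container listens on this well-known address
  selenium_url = 'http://chrome:4444/wd/hub'

  Capybara.register_driver :selenium_chrome_remote do |app|
    Capybara::Selenium::Driver.new(app,
                                   browser: :remote,
                                   url: selenium_url,
                                   desired_capabilities: :chrome)
  end
end
```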


Note that I added those lines conditionally (since I still want to be able to use a local chrome via chromedriver), namely only if an environment variable DOCKER is set. We defined that environment variable in our Dockerfile, so you might need to adapt this to your case.

Also note that the selenium_url is hard-coded. You could very well take a different approach, e.g. using an externally specified SELENIUM_URL, but ultimately the requirement is that the driver needs to know that the chrome instance is running on host chrome, port 4444 (the container's default).

Configure capybara to use the appropriate host and driver

The next step is to ensure that javascript-requiring system tests are actually run with the given driver and use the right host. To achieve that, we can add a before-hook to the corresponding tests – or we can configure rspec to always include such a hook by modifying the rspec configuration in rails_helper.rb like this:
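A sketch of such a configuration (the driver name and the exact address heuristic are assumptions; adapt them to your setup):

```ruby
# rails_helper.rb
require 'socket'

RSpec.configure do |config|
  config.before(:each, type: :system, js: true) do
    if ENV['DOCKER']
      driven_by :selenium_chrome_remote

      # First private IPv4 address of the web container, so that the
      # chrome container can reach the application under test.
      ip = Socket.ip_address_list
                 .find { |ai| ai.ipv4_private? }
                 .ip_address
      host! "http://#{ip}:#{Capybara.server_port}"
    else
      # local chrome via chromedriver for non-dockerized runs
      driven_by :selenium, using: :chrome
    end
  end
end
```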


Note the part with the ip-address: it tries to find a private IPv4 address of the web-container (the container running the tests) to ensure the chrome-container uses this address to access the application. The Capybara.server_port is important here, since it corresponds to the puma instance launched by the tests.

That heuristic (first private IPv4 address) works for us at the moment, but it might not work for you. It is basically a workaround for the fact that I couldn't get web resolvable for the chrome container – which may be fixable on the docker side, but I was too lazy to investigate that further.

If you change it: just make sure the host! method uses a URI pointing to an address of the web-container that is reachable by the chrome-container.

Define tests with type: :system and js: true

Last but certainly not least, you need actual tests of the required type, marked with js: true where javascript support is needed. This can be achieved by creating test files starting like this:
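For example (a hypothetical spec; file name, description and body are made up):

```ruby
# spec/system/signin_spec.rb
require 'rails_helper'

RSpec.describe 'Signing in', type: :system, js: true do
  it 'greets the user after login' do
    visit root_path
    # ... the usual capybara interactions ...
  end
end
```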

Since the new rspec-style system tests are based on the feature specs that were around previously, the rest of the test reads exactly as described for feature specs.

Run the tests

To run the tests a commandline like the following should do:

docker-compose run web rspec

It won’t make a big noise about running the tests against chrome, unless something fails. In that case you’ll see a message telling you where the screenshot has been placed.


Below are some hints about problems I've seen while setting this up:

Test failing, screenshot shows login screen

In that case puma might be configured wrongly, or you are not using transactional fixtures. See the hints above about the rails version to use, which also include some pointers to helpful explanations.

Note that rspec-rails by default suppresses the puma startup output, as it clutters the test output. For debugging purposes it might be helpful to change that by adding the following line to your tests:
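Something along this line, assuming rspec-rails silences puma via the rails system-testing server setting:

```ruby
# show puma's startup output again (rspec-rails sets this to true by default)
ActionDispatch::SystemTesting::Server.silence_puma = false
```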

Error message: "Unable to find chromedriver …"

This indicates that your driver is not configured properly: the default for system tests is to be driven_by selenium, which tries to spawn its own chrome instance and is suitable for non-dockerized tests only.

Check if your tests are marked as js: true (if you followed the instructions above) and that you properly added the before-hook to your rspec-configuration.

Collisions with VCR

If you happen to have tests that make use of the vcr gem, you might see it complaining about not knowing what to do with the requests between the driver and the chrome instance. You can fix this by telling VCR to ignore those requests, adding a line where you configure VCR:
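For example (a sketch, assuming the chrome container is reachable under the hostname "chrome"):

```ruby
VCR.configure do |c|
  # don't try to record/match the selenium traffic to the chrome container
  c.ignore_hosts 'chrome'
end
```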

Sharing code between puppet providers

So you’ve written that custom puppet type for something and start working on another puppet type in the same module. What if you need to share some code between these types? Is there a way of code-reuse that works with the plugin sync mechanism?

Yes, there is.

Puppet even has two possible ways of sharing code between types.

Option #1: a shared base provider

A provider in puppet is basically a class associated with a certain (puppet) type, and there can be a lot of providers for a single type (just look at the package type!). It seems quite natural that it's possible to define a parent class for those providers. So natural, that even the official puppet documentation writes about it.
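A sketch of what such a shared base provider might look like (the type, class and provider names are hypothetical):

```ruby
# lib/puppet/provider/mytype.rb — the shared parent class
class Puppet::Provider::Mytype < Puppet::Provider
  def shared_helper
    # code needed by all providers of this type
  end
end

# lib/puppet/provider/mytype/foo.rb — a concrete provider
require 'puppet/provider/mytype'

Puppet::Type.type(:mytype).provide(:foo, parent: Puppet::Provider::Mytype) do
  desc 'A provider reusing the shared base class.'
end
```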

Option #2: Shared libraries

The second option is a shared library, shipped in a certain namespace in the lib directory of the module; the idea is mostly sketched in the feature ticket #14149. Basically, one defines a class in the special Puppetx namespace, using the author and module name in the class name in order to avoid conflicts with other modules.
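Such a shared library might look like this (author and module names are hypothetical):

```ruby
# lib/puppetx/myauthor/mymodule.rb
module Puppetx
  module Myauthor
    module Mymodule
      def self.shared_helper
        # shared code goes here
      end
    end
  end
end
```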

This example would be saved to the corresponding file in your module's lib folder and be included in your provider with something along the following:
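A sketch (the require path and all names are hypothetical):

```ruby
# in the provider
require 'puppetx/myauthor/mymodule'

Puppet::Type.type(:mytype).provide(:foo) do
  include Puppetx::Myauthor::Mymodule
end
```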

Compatibility with Puppet 4:

In puppet 4 the name of the namespace has changed slightly. It's now called 'PuppetX' instead of 'Puppetx' and is stored in a file 'puppet_x.rb', which means that the require and the module name itself need to be changed:
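The puppet 4 variant of the example above would then look roughly like this (names still hypothetical):

```ruby
# lib/puppet_x/myauthor/mymodule.rb
require 'puppet_x'

module PuppetX
  module Myauthor
    module Mymodule
      def self.shared_helper
        # shared code goes here
      end
    end
  end
end
```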

For backward compatibility with puppet 3 you could instead add something like this, according to my co-worker mxey, who knows way more about ruby than I do:
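Something along these lines (a reconstruction, not necessarily mxey's literal snippet): on puppet 3, which only knows the old namespace, define the new name as an alias for it:

```ruby
# Puppet 3 ships the old namespace; stand in for it here so the snippet is
# self-contained. In a real module the namespace already exists.
module Puppetx; end

# make the Puppet 4 name usable on Puppet 3
PuppetX = Puppetx unless defined?(PuppetX)
```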

Apart from this you’d need to change the require to be conditional on the puppet-version and refer to the module by the aliased version (which is left as an exercise for the reader ;))

PHP and big numbers

One would expect that one of the most used script languages of the world would be able to do proper comparisons of numbers, even big numbers, right? Well, PHP is not such a language, at least not on 32bit systems. Given a script like this:


<?php
$t1 = "1244431010010381771";
$t2 = "1244431010010381772";

if ($t1 == $t2) {
    print "equal\n";
}
A current PHP version will output:

schoenfeld@homer ~ % php5 test.php
equal

It will do the right thing on 64bit systems (not claiming that the numbers are equal). The reason: for ==, PHP converts numeric strings to numbers, and on 32bit systems both strings end up as the same float. Interestingly enough, a type-checking equality check (===, see my article from a few years ago) will not claim that the two numbers are equal.

Things that make you a good programmer

If you ever wondered if you are a good programmer (not), you might think about the following points:

1. Repeat yourself. How else would you keep yourself busy, if your customer has new requirements?

2. Re-Using code is for people who take the other side of the street if a big dog walks along. No risk, no fun. How else would you find out that the common idiom you use is really the way the job has to be done?

3. If you have a coding convention (e.g. how code has to be indented): Just ignore it. It's a good thing to have editors go crazy when trying to automatically detect the indenting of a source file. By always confusing the editor you keep up the fun for the people who try to change your code. It would be too boring for them to simply edit the file without the quiz of which indenting applies to your code. Extra points for those who additionally (to mixing tabs, spaces, 4 and 8 spaces, expandtab and no expandtab) write a vim modeline into their file that – by guarantee – does not match the indenting of the file.

4. If you can complicate things: Do it. Numeric indices in arrays can be exchanged for arbitrary strings; this makes your code more interesting. Especially if you have to move elements in that array. And complicated code makes people who don't know it think that you are a good programmer. One step nearer to your goal, isn't it?

5. When working with different classes, invent a system that auto-loads the classes you need. Don't document it; that would be a risk for your job. It's not necessary anyway, because good programmers like riddles, and finding out what gets called, where and why, is a simple but entertaining riddle. After all, documenting is a bad idea as well. Especially if you make your system open source, because with good documentation it might be too easy for competitors to use your code to make money.

6. If you find ways to do extra function calls: Do it. It gives you the chance to refactor your code if the customer notices that it is too slow. Great opportunity, huh?

(But, seriously: Don't listen to me. It's just a cynical way to express my feelings.)

Cool PHP-Code.

Did you know that in PHP you can write something like this:

$test = "foobar";
$test = sTr_RePlace("bar", "baz", $test);
$x = sPrinTf("%s is strange.", $test);

pRint $x . "\n";
eCho "foo";

What frightens me is that this is actually used, e.g. by this code snippet which exists (in a similar form) in an unnamed PHP project:

echo sPrintF(_("Bla bla bla: %s"), $bla);

And yes, they do use echo to output the result of sprintf.

Update: So I got this great comment. The commentator wants to point out that the senseless use of "echo sprintf" is because of gettext. He says "That's simply the way you use gettext." But this is simply not true. The difference between printf and sprintf is that the first one outputs the string, while the second one returns it. That means that in the above example printf could be used (instead of sprintf) without a useless echo call in front of it. The reason for using sprintf (and eventually the reason why you find it in a lot of applications using gettext) is that you can use it to fill a variable with the translated string or to use the string in-place. A common use-case for this is to hand a translated string to a template engine, for example.

PHP and the great "===" operator

Let's suppose you've got an array with numeric indices:

[0] => 'bla'
[1] => 'blub'

Now you want to do something if the element 'bla' is found in that array.
Well, you know that PHP has a function array_search, which returns the key of a found value. Let's say you write something like this:

if (array_search('bla', $array)) { do_something(); }

Would you expect that do_something would actually do something?
If yes: You are wrong.
If no: Great, you've understood some parts of the PHP insanity.

Actually, if 'bla' didn't have the index 0 it would work, because 1, 2, 3, 4, etc. evaluate to TRUE. But unfortunately PHP does some implicit casting, which makes 0 behave like FALSE, depending on the context. So following this, the if() works for all elements except the one at index 0.

You might be tempted to write

if(array_search… != FALSE)

But this wouldn’t help you, because 0 would still evaluate to FALSE, leading to if(FALSE != FALSE) which is (hopefully obviously) never true.

A PHP beginner (or even an intermediate, if he never stumbled across this case) might ask:
What's the solution to this dilemma?

Luckily the PHP documentation is great: it tells you about this. And additionally PHP has got this great operator (===), which causes people to ask "WTF?" when they hear about it for the first time. In addition to comparing the values, it also checks the types of the operands. This leads to the wanted result, because 0 is an integer while FALSE is a boolean. So the solution for our problem looks like this:

if (array_search('bla', $array) !== FALSE) { do_something(); }

Isn’t this great?

To the rescue, git is here!

Consider the following scenario:

You work on a project for a customer that is handled in a Subversion Repository.
The work you get comes in form of some tickets. Tickets may affect different parts
of the project, but they could also affect the same parts of the project. Testing of your work
is done by a different person. For some reasons it's wanted that the fix for each ticket
is committed in only one commit in the subversion repository and only after it has been
tested. Commits for two tickets changing the same files should not be mixed.

Now: How do you avoid a mess when working with several patches that possibly affect the same files?

The answer is: git with git-svn. Currently my workflow looks like the following:

  1. Create a branch for each ticket
  2. Make my changes for the ticket in this branch
  3. Create a patch from the changes in this branch and supply the tester with it
  4. Wait for feedback and eventually repeat steps 2-4
  5. When testing is finished, run git rebase -i master in the branch, squash all commits into one, and build a proper commit message from the template git provides.
  6. Switch to master branch and merge the changes from the branch.
  7. Rebase master against the latest svn (git svn rebase) and dcommit
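The steps above could be sketched on the commandline roughly like this (branch and file names are hypothetical):

```shell
git checkout -b ticket-1234          # 1. a branch for the ticket
# 2. ... hack, commit in micro-steps as often as you like ...
git diff master ticket-1234 > ticket-1234.patch   # 3. a patch for the tester
git rebase -i master                 # 5. squash the branch's commits into one
git checkout master
git merge ticket-1234                # 6. merge the squashed commit
git svn rebase                       # 7. rebase against the latest svn ...
git svn dcommit                      #    ... and commit to the SVN repository
```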

That workflow works well for me. It gives me the possibility to do micro-commits as much as I want and still commit only a well-defined, well-tested change into the project's SVN.
Just some minor drawbacks I haven't yet solved (you know, git /can/ do everything, but that is actually a problem in itself):

  • It's a bit annoying that I need to use git rebase -i and change n-1 lines for a number of n commits so that they are squashed. It would be handy to say: squash all commits that happened in this branch into one.
  • Creating the diff for the tester requires me to do git log, search for the last commit before my first commit in that branch and use it with git diff to create a patch. I had a quick look at git-rev-parse and felt it is overwhelmingly complex to find out how to do it better. Too many possibilities. git is complex.

For now I cannot tell how well merging works as soon as conflicts arise. But I guess git will do fine, although there is the possibility that it could get complex again.

Nevertheless, it's probably not a credit to git specifically, but to DVCSes in general. Anyway: I like it.