Testing javascript in a dockerized rails application with rspec-rails

The other day I wanted to add support for tests of javascript functionality in a (dockerized) rails application using rspec-rails.

Since rails 5.1 includes system tests with niceties like automatically taking a screenshot on failed tests, I hoped for a way to benefit from these features without changing to another test framework. Lucky me – only recently the authors of rspec-rails added support for so-called system specs. There is not much documentation so far, but there is a lot of useful information in the corresponding bug report #1838, and a friendly guy named Thomas Walpole (@twalpole) is helpfully answering questions in that issue.

To make things a little bit more complicated: the application in question is usually running in a docker container and thus the tests of the application are also run in a docker container. I didn’t want to change this, so here is what it took me to get this running.

Overview

Let’s see what we want to achieve exactly: From a technical point of view, we will have the application under test (AUT) and the tests in one container (let’s call it: web) and we will need another container running a javascript-capable browser (let’s call it: chrome).

Thus we need the tests to drive a remotely running browser (at least when running in a docker environment), which needs to access the application under a different address than usual – namely an address reachable by the chrome-container, since the application will not be reachable via 127.0.0.1, as is (rightfully) assumed by default.

If we want Warden authentication stubbing to work (as we do, since our application uses Devise) and transactional fixtures as well (i.e. rails handling database cleanup between tests without the database_cleaner gem), we also need to ensure that the application server is started by the tests and that the tests are actually run against that server. Otherwise we might run into problems.

Getting the containers ready

Assuming you already have a container setup (and are using docker-compose like we do), there is not that much to change on the docker front. Basically you need to add a new service called chrome, point it to an appropriate image, and add a link to it in your existing web-container.

I’ve decided to use standalone-chrome for the browser part, for which there are docker images provided by the selenium project (they also have images for other browsers). Kudos for that.

The link ensures that the chrome instance is available before we run the tests and that the web-container is able to resolve the name of this container. Unfortunately this is not true the other way round, so we need some magic in our test code to find out the ip-address of the web-container. More on this later.

Other than that, you probably want to configure a volume for you to be able to access the screenshots, which get saved to tmp/screenshots
in the application directory.
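For illustration, a docker-compose sketch of such a setup could look like this (the image is the selenium project's standalone-chrome; service names and paths are assumptions, adjust them to your setup):

# docker-compose.yml (sketch)
version: '2'
services:
  web:
    build: .
    links:
      - chrome                                   # makes the hostname "chrome" resolvable from web
    volumes:
      - ./tmp/screenshots:/app/tmp/screenshots   # access failure screenshots on the host
  chrome:
    image: selenium/standalone-chrome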

Preparing the application for running system tests

There is a bit more to do on the application side. The steps are roughly:

  1. Add necessary dependencies / version constraints
  2. Register a driver for the remote chrome
  3. Configure capybara to use the appropriate host for your tests (and your configured driver)
  4. Add actual tests with type: :system and js: true

Let’s walk them through.

Add necessary dependencies

What we need is the following:

  • rspec-rails version >= 3.7.1
  • rails itself > 5.1.4 (unreleased at time of writing)
  • capybara and capybara-selenium

The required features are already part of 3.7.0, but 3.7.1 is the version I used and it contains a bugfix which may or may not be relevant.

One comment about the rails version: for the tests to work properly it’s vital that puma uses certain settings. Rails 5.1.4 (the version released at the time of writing) uses the settings from config/puma.rb, which most likely collide with the necessary settings. You can ensure these settings yourself or use rails from branch 5-1-stable, which includes this change. I decided for the latter and pinned my Gemfile to the then current commit.
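A sketch of the corresponding Gemfile entries (the concrete commit to pin to is omitted on purpose):

# Gemfile (excerpt, sketch)
gem 'rails', git: 'https://github.com/rails/rails.git', branch: '5-1-stable'

group :test do
  gem 'rspec-rails', '>= 3.7.1'
  gem 'capybara'
  gem 'capybara-selenium'
end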

Register a driver for the remote chrome

To register the required driver, you’ll have to add some lines to your rails_helper.rb:
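A sketch of what these lines can look like (the driver name :selenium_chrome_remote is my own choice here, not something prescribed by capybara):

# spec/rails_helper.rb (excerpt, sketch)
if ENV['DOCKER']
  selenium_url = 'http://chrome:4444/wd/hub'   # host "chrome", the container's default port

  Capybara.register_driver :selenium_chrome_remote do |app|
    Capybara::Selenium::Driver.new(app,
      browser: :remote,
      url: selenium_url,
      desired_capabilities: :chrome)
  end
end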

 

Note that I added those lines conditionally (since I still want to be able to use a local chrome via chromedriver) if an environment variable DOCKER is set. We defined that environment variable in our Dockerfile and thus you might need to adapt this to your case.

Also note that the selenium_url is hard-coded. You could very well take a different approach, e.g. using an externally specified SELENIUM_URL, but ultimately the requirement is that the driver needs to know that the chrome instance is running on host chrome, port 4444 (the container’s default).

Configure capybara to use the appropriate host and driver

The next step is to ensure that javascript-requiring system tests are actually run with the given driver and use the right host. To achieve that we need to add a before-hook to the corresponding tests … or we can configure rspec accordingly to always include such a hook by modifying the rspec-configuration in rails_helper.rb like this:
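A sketch of such a hook (the driver name matches the registration sketch above; the fixed server port, the 0.0.0.0 binding and the non-docker fallback are assumptions of mine):

# spec/rails_helper.rb (excerpt, sketch)
require 'socket'

if ENV['DOCKER']
  Capybara.server_host = '0.0.0.0'   # assumption: let the test server listen on all interfaces
  Capybara.server_port = 3010        # assumption: fixed port, so the URL below is predictable
end

RSpec.configure do |config|
  config.before(:each, type: :system, js: true) do
    if ENV['DOCKER']
      driven_by :selenium_chrome_remote

      # use the first private IPv4 address of the web container, so that the
      # chrome container can reach the application server started by the tests
      ip = Socket.ip_address_list.find(&:ipv4_private?).ip_address
      host! "http://#{ip}:#{Capybara.server_port}"
    else
      driven_by :selenium, using: :chrome
    end
  end
end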

 

Note the part with the ip-address: it tries to find an IPv4 private address for the web-container (the container running the tests) to ensure the chrome-container uses this address to access the application. The Capybara.server_port is important here, since it will correspond to the puma instance launched by the tests.

That heuristic (first private IPv4 address) works for us at the moment, but it might not work for you. It is basically a workaround for the fact that I couldn’t get web resolvable for the chrome container – which may be fixable on the docker side, but I was too lazy to investigate that further.

If you change it: just make sure the host! method uses a URI pointing to an address of the web-container that is reachable by the chrome-container.

Define tests with type: :system and js: true

Last but certainly not least, you need actual tests of the required type, with or without js: true. This can be achieved by creating test files starting like this:
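A minimal sketch (the file name, the spec description and the Devise path helper are made up for illustration):

# spec/system/sign_in_spec.rb (sketch)
require 'rails_helper'

RSpec.describe 'Signing in', type: :system, js: true do
  it 'lets an existing user sign in' do
    visit new_user_session_path
    # ... fill in the form, submit it, assert on the resulting page ...
  end
end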

Since the new rspec-style system tests are based on the feature specs that were around previously, the rest of the test is written exactly as described for feature specs.

Run the tests

To run the tests a commandline like the following should do:

docker-compose run web rspec

It won’t make a big noise about running the tests against chrome, unless something fails. In that case you’ll see a message telling you where the screenshot has been placed.

Troubleshooting

Below are some hints about problems I’ve seen while setting this up:

Test failing, screenshot shows login screen

In that case puma might be configured wrongly or you are not using transactional fixtures. See the hints above about the rails version to use which also includes some pointers to helpful explanations.

Note that rspec-rails by default does not show puma’s startup output, as it would clutter the test output. For debugging purposes it might be helpful to change that by adding the following line to your tests:
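Assuming rspec-rails silences the server by configuring capybara’s puma server with Silent: true, re-enabling the output could look roughly like this:

# e.g. in rails_helper.rb (sketch)
Capybara.server = :puma, { Silent: false }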

Error message: „Unable to find chromedriver … “

This indicates that your driver is not configured properly, because the default for system tests is to be driven_by selenium, which tries to spawn its own chrome instance and is suitable for non-dockerized tests.

Check if your tests are marked as js: true (if you followed the instructions above) and that you properly added the before-hook to your rspec-configuration.

Collisions with VCR

If you happen to have tests that make use of the vcr gem, you might see it complaining about not knowing what to do with the requests between the driver and the chrome instance. You can fix this by telling VCR to ignore those requests, adding a line where you configure VCR:
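For example (assuming the driver reaches the chrome container via its hostname):

VCR.configure do |config|
  config.ignore_hosts 'chrome'
end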

Ansible: Indenting in Templates

When using ansible to configure systems and services, templates can reach a significant complexity.  Proper indenting can help to improve the readability of the templates, which is very important for further maintenance.

Unfortunately the default settings for the jinja2 template engine in ansible do enable trim_blocks only, while a combination with lstrip_blocks would be better. But here comes the good news:

It’s possible to enable that setting on a per-template basis. The secret is to add a special comment to the very first line of a template:
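To the best of my knowledge the header looks like this (double-check the exact syntax against the ansible template module documentation):

#jinja2: lstrip_blocks: True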

This setting does the following: If enabled, leading spaces and tabs „are stripped from the start of a line to a block“.

So a resulting template could look like this:
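A hypothetical reconstruction of such a template (haproxy-flavoured, with a made-up stats_enabled variable; line 4 is the one whose whitespace survives):

#jinja2: lstrip_blocks: True
listen stats
    {% if stats_enabled %}
    stats enable
    {% endif %}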

Unfortunately (or fortunately, if you want to see it this way 😉) this does not strip leading spaces and tabs where the indentation is followed by pure text, e.g. the whitespace in line 4 is preserved. So as a matter of fact, if you care about the indentation in the resulting target file, you need to indent those lines according to the indentation wanted in the target file instead, like it is done in the example.

In less simple cases, with deeper nesting, this may seem odd, but hey: it’s the best compromise between a good, readable template and a consistently indented output file.

aptituz/ssh 2.3.2 published

I’ve just uploaded an updated version of my puppet ssh module to the forge.

The module aims at being a generic module to manage ssh servers and clients, including key generation and known_hosts management. It provides a mechanism to generate and deploy ssh keys without the need for storeconfig or PuppetDB, using a server-side cache instead. This is neat if you want to retain ssh keys during a reprovisioning of a host.

Updates

The update is mostly to push out some patches I’ve received from contributors via pull requests in the last few months. It adds:

  • Support for the AllowUsers, AllowGroups and DenyUsers as well as DenyGroups parameters in the default sshd_config template. Thanks to cachaldora for the patches.
  • Support for multiple ports in the default sshd template. Thanks to Arnd Hannemann for that patch.
  • Fixes in the template for it to work with newer puppet versions. Untested by me, but this probably fixes compatibility with puppet 4. For that contribution my thanks go to Daine Danielson.

Apart from these changes I’ve added a couple of beaker tests.

If the module is of any use for you, I’d be happy about ratings at puppetforge. The same is true for critical feedback, bug reports or (even better 🙂) pull requests.

Testing puppet modules: an overview

When it comes to testing puppet modules, there are a lot of options, but for someone entering the world of puppet module testing, the sheer variety may seem overwhelming. This is an attempt to provide some overview.

Inbox: Zeroed.

E-Mail is a pest, a big time killer wasting your and my time each and every day. Of course it is also a valuable tool, one that no one can renounce. So how can it be of more use than trouble?

So far I’ve followed a no-delete policy when it comes to my mails, since space was not a problem at all. But it developed into a big nasty pile of mails that brought regular distraction each time I looked at my inbox. So I decided to adopt the Inbox Zero concept.

Step 1: Get the pile down

My e-mails had piled up for years, so I had around 10000 mails in my inbox, with some hundred of them unread. I needed to get this pile down and started with the most recent mails, trying to identify clusters of mails, filtering for them and then following these steps:

  • prevent: A lot of the mails I get are newsletters and mailinglist posts (e.g. Debian lists and some open source products). For each of them, I decided if I really want them to go to my inbox. If not: unsubscribe.
  • file or delete: Do I need it for reference or should it go to trash? I trashed basically every newsletter and mails for which copies exist (e.g. mailinglist posts) and archived everything where I was unsure. It doesn’t matter, really. What’s important is that the inbox gets down to zero, because that’s where you spend your daily time. Your archive folders can be as full as you like, as long as your search function is good 😉

Since it wasn’t possible to decide on a course for every mail (that would be a bit like hoovering in the desert), I did this only for the first 1000 mails or so. All mails older than a month were marked read and moved to the archive immediately after. Another approach would be to move all mails to a folder called DMZ and go to step 2.

Step 2: Prepare for implanting some habits

Most mails are the opposite of good old hackish perl code: read only. They are easy to act on, when they come around: just archive or delete them.

But the rest will be what steals your time. Some mails require action, either immediately or in a while, some wait for a schedule, e.g. flight information or reservation mails and the like. Whatever the reason is, you want to keep them around, because they still have a purpose. There are various filing systems for those mails, most of them GTD variants. As a gmail user I found this variant, with multiple inboxes in a special gmail view, interesting and am now giving it a try.

One word about the archive folders: I can highly recommend reducing the number of folders you archive to as much as possible.

Step 3: Get into habit

Now to the hard part: getting into the habit of acting on your inbox. Do it regularly, maybe every hour or so, and be prepared to make quick decisions.

Act on any mail immediately, which means either file/delete it, reply to it (if that is what takes less time) or „mark“ it according to your filing system as prepared in step 2. And if no mails arrived, then it’s a good moment to review your marked mails and see if any of them can be further processed.

Now let’s see whether my inbox will still be zeroed a month from now.

Sharing code between puppet providers

So you’ve written that custom puppet type for something and start working on another puppet type in the same module. What if you needed to share some code between these types? Is there a way of code-reuse that works with the plugin sync mechanism?

Yes, there is.

Puppet even has two possible ways of sharing code between types.

Option #1: a shared base provider

A provider in puppet is basically a class associated with a certain (puppet) type, and there can be a lot of providers for a single type (just look at the package providers!). It seems quite natural that it’s possible to define a parent class for those providers. So natural that even the official puppet documentation writes about it.
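A minimal sketch of that approach (all names are made up for illustration):

# lib/puppet/provider/mytool.rb -- shared parent class (sketch)
class Puppet::Provider::Mytool < Puppet::Provider
  def self.mytool_binary
    '/usr/bin/mytool'   # logic shared by all providers of this module
  end
end

# lib/puppet/provider/mytool_frobnicate/ruby.rb -- a concrete provider (sketch)
require 'puppet/provider/mytool'

Puppet::Type.type(:mytool_frobnicate).provide(:ruby, parent: Puppet::Provider::Mytool) do
  def exists?
    File.executable?(self.class.mytool_binary)
  end
end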

Option #2: Shared libraries

The second option is a shared library, shipped in a certain namespace in the lib directory of the module; the idea is mostly sketched in the feature ticket #14149. Basically one defines a class in the special Puppetx namespace, using the author and module name in the class name, in order to avoid conflicts with other modules.
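Such a shared library could look like this (author and module name are placeholders):

module Puppetx
  module Myauthor
    module Mymodule
      # code shared between the providers of this module
      def self.parse_config(path)
        # ...
      end
    end
  end
end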

This example would be saved to

lib/<author>/<modulename>

in your module’s folder and be included in your provider with something along the lines of the following:
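For example (the require path assumes the file lives at lib/puppetx/myauthor/mymodule.rb; adjust it to your module's layout):

require 'puppetx/myauthor/mymodule'

Puppet::Type.type(:mymodule_thing).provide(:ruby) do
  def create
    config = Puppetx::Myauthor::Mymodule.parse_config('/etc/mymodule.conf')
    # ... act on the parsed config ...
  end
end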

Compatibility with Puppet 4:

In puppet 4 the name of the namespace has changed slightly. It’s now called ‚PuppetX‘ instead of ‚Puppetx‘ and is stored in a file ‚puppet_x.rb‘, which means that the require and the module name itself need to be changed:
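Applied to the sketch above, the puppet 4 variant would look roughly like this:

require 'puppet_x'   # ships with puppet 4 and defines the empty PuppetX module

module PuppetX
  module Myauthor
    module Mymodule
      def self.parse_config(path)
        # ...
      end
    end
  end
end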

For backward compatibility with puppet 3 you could instead add something like this, according to my co-worker mxey, who knows way more about ruby than I do:
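A sketch of the aliasing idea (my own reconstruction, not necessarily the exact snippet):

# define the code once under the new name and let the old constant point at it,
# so both Puppetx::... and PuppetX::... resolve to the same module
module PuppetX
  module Myauthor
    module Mymodule
      # ...
    end
  end
end

Puppetx = PuppetX unless defined?(Puppetx)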

Apart from this you’d need to change the require to be conditional on the puppet-version and refer to the module by the aliased version (which is left as an exercise for the reader ;))

WordPress(ed) again.

I just migrated my blog(s) to WordPress.

Just recently I decided to put more time into blogging again. I wasn’t entirely happy with Movable Type anymore, especially since the last update broke my customized theme and I struggled with the installation of another theme, which basically was just never found, no matter which path I put it into.

What I wanted is just more blogging, not all that technical stuff. And since the Movable Type makers also seem to have gone crazy (their „Getting started“ site tells users to head over to movabletype.com to get a 999$ license) I decided to get back to WordPress.

There were reasons why I hadn’t chosen WordPress back when I migrated from Blogger to MT, but one has to say that things have moved a lot since then. WordPress is as easy as it can be and has a prospering community, something I cannot say about Movable Type.

The migration went okay, although there were some oddities in the blog entries exported by MT (e.g. datetime strings with EM and FM at the EOL) and I needed to figure out how the multisite feature in WordPress works. But now I have exactly what I want.

Resources about writing puppet types and providers

When doing a lot of devops stuff with Puppet, you might get to a point where the existing types are not enough. That point is usually reached when a task at hand becomes extraordinarily complex when trying to achieve it with the Puppet DSL. One example of such a case could be if you need to interact with a system binary a lot. In this case, writing your own puppet type might be handy.

Now where to start, if you want to write your own type?

Overview: modeling and providing types

First thing that you should know about puppet types (if you do not already): a puppet resource type consists of a type and one or more providers.

The type is a model of the resource and describes which properties (e.g. the uid of a user resource) and parameters (like the managehome parameter) a resource has. It’s a good idea to start with a rough idea of what properties you’ll be managing with your resource and what values they will accept, since the type also does the job of validation.

What actually needs to be done on the target system is the provider’s job. There can be different providers for different implementations (e.g. a native ruby implementation or an implementation using a certain utility), different operating systems and other conditions.

A combination of a type and a matching provider is what forms a (custom) resource type.
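As a rough illustration, a stripped-down type/provider pair could look like this (the greeting resource and its file-based provider are invented for the example):

# lib/puppet/type/greeting.rb (sketch)
Puppet::Type.newtype(:greeting) do
  ensurable

  newparam(:name, namevar: true) do
    desc 'The file the greeting is written to.'
  end

  newproperty(:message) do
    desc 'The text of the greeting.'
    validate do |value|
      raise ArgumentError, 'message must be a string' unless value.is_a?(String)
    end
  end
end

# lib/puppet/provider/greeting/ruby.rb (sketch)
Puppet::Type.type(:greeting).provide(:ruby) do
  def exists?
    File.exist?(resource[:name])
  end

  def create
    File.write(resource[:name], resource[:message])
  end

  def destroy
    File.delete(resource[:name])
  end

  def message
    File.read(resource[:name])
  end

  def message=(value)
    File.write(resource[:name], value)
  end
end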

Resources

Next I’ll show you some resources about puppet provider development, that I found useful:

Official documentation:

Actually, types and providers are quite well documented in the official documentation, although it might not go too much into the details:

Blog posts:
A hands-on tutorial in multiple parts with good explanations can be found in the blog posts by Gary Larizza:

Books:
The probably most complete information, including explanations of the puppet resource model and its resource abstraction layer (RAL), can be found in the book Puppet Types and Providers by Dan Bode and Nan Liu.

The puppet source:
Last but not least, it’s always worth a peek at how others did it. The puppet source contains all providers of the official puppet release, as well as the base libraries for puppet types and providers with their api documentation: https://github.com/puppetlabs/puppet/

Bringing GVFS to a good use

One of the GNOME features I have really liked since the beginning of my GNOME usage is the ability to mount various network file systems with a few clicks and keystrokes. It enables me to quickly access NFS shares or files via SFTP. But so far these mounts weren’t actually mounts in a classical sense, so they were only of limited use.

As a user who often works with terminals I was always halfway happy with that feature and halfway not:

– Applications have to be aware of and enabled to make use of that feature, so it’s often necessary to work around problems (e.g. movie players not being able to open a file on a share)
– No shell access to files

Previously this GNOME feature was realised with an abstraction layer called GNOME VFS, which all applications needed to use if they wanted to provide access to the „virtual mounts“. It made no effort to actually re-use common mechanisms of Un*x-like systems, like mount points, so it was doomed to fail to a certain degree.

Today GNOME uses a new mechanism, called GVFS. It’s realized by a shared library and daemon components communicating over DBUS. At first glance it does not seem to change anything, so I was rather disappointed. But then I heard rumors that Ubuntu was actually making these mounts available at a special mount point in ~/.gvfs.
My Debian GNOME installation was not.

So I investigated a bit and found evidence of a daemon called gvfs-fuse-daemon, which eventually handles that. After that I figured this daemon to be in a package called gvfs-fuse and learned that installing it and restarting my GNOME session is actually all that needs to be done.
Now getting shell access to my GNOME „Connect to server“ mounts is actually possible, which makes these mounts really useful after all. The only thing left to find out is whether e.g. the video player example now works from Nautilus. But if it doesn’t, I’m still able to use it via a shell.

The solution is quite obvious on the one side, but totally non-obvious on the other.

A common user will eventually not find that solution without aid. After all the package name does not really suggest what the package is used for, since it refers to technologies instead of the problem it solves. Which is understandable. What I don’t understand is why this package is not a dependency of the gnome meta package. But I haven’t yet asked the maintainer, so I cannot really blame anybody.

However: Now GVFS is actually useful.

Why Gnome3 sucks (for me)

When I started using Linux, I started with a desktop environment (KDE) and then tried a lot of (standalone) window managers, including but not limited to Enlightenment, Blackbox, Fluxbox and Sawfish. But I was never really satisfied, as it felt as if something was missing.
So it came that I became a user of a desktop environment again. Now I have been a GNOME user for at least five years.

Among the users of desktop environments, I’m probably not a typical user. In 2009 my setup drifted from a more or less standard GNOME 2.3 to a combination of GNOME and a tiling window manager, which I called Gnomad, as a logical continuation of something I’ve done for a long time since using computers: simplifying tasks which are not my main business.
I just didn’t want to care about the hundred techniques to auto-mount a USB stick or similar tasks, which are handled just fine by a common desktop environment. And I didn’t want to care about arranging windows, because after all the arrangement of my windows was always more or less the same.
But there were rumors that GNOME3 significantly changed the user experience and I wanted to give it a try at some point in the future. This try was forced by latest updates in Debian unstable, so I tested it for some days.

Day 1: Getting to know each other
My first day with GNOME3 was a non-working day. When I’m at home I’m mostly using my computer for some chatting and surfing the web, so I don’t have great demands on the
window manager/desktop environment.
Accordingly the very first experience with GNOME3 was mostly a good one, except for some minor issues.
The first thing I noticed in a positive way is the activities screen. I guess this one is inspired by Mac Exposé, but it’s nevertheless a nice thing, as it provides an overview of opened applications.
Apart from that, it’s possible to launch applications from there. The classical application menu is gone, but this one is better. One can either choose with the mouse or start typing the application’s name and it will incrementally search for it and show it immediately. Hitting Enter is enough to launch the application.
Additionally, on the left, there is a launcher for your favorite applications.

This one led to the first question mark above my head.

I had opened a terminal via this launcher and now wanted to open another terminal, after I had switched to a different workspace.
So I just clicked it again and had to notice that the GNOME developers and I have a different perception of what’s intuitive, because that click
led me back to the terminal on the first workspace. It took me some minutes to realize how I could start a second terminal: by right-clicking on the icon and clicking on Open new window or similar.

Day 2: Doing productive work

The next day was a work day and I was on a customer appointment to do support/maintenance tasks. On these appointments my notebook is not my primary work machine and so I could gradually move over to using GNOME3 for productive work.
I can say that it worked, although I soon started to miss some keystrokes which I’m used to. Like switching workspaces with Meta4+Number or at least switching workspaces by cycling through them with Ctrl+Alt+Left and Right arrow keys. While the first is a shortcut specific to my Gnomad setup, the latter is something I knew back from the good old Gnome2 days.
It just vanished from the default keybindings and did nothing. Apparently, as I learned afterwards, it has been decided to use the Up/Down arrow keys instead.

While for new users this will not be a problem at all, it is really hard for someone who has been using GNOME for about 5 years, as these are keystrokes one is really used to.

Day 3: Going multihead


The appointment ended on the third day in the afternoon, so when I came back into the office, I had the chance to test the whole thing in my usual work environment. At the office I have my notebook attached to a docking station which has a monitor attached to it. So usually I work in dual-head mode, with my primary work screen being the bigger external screen.

That was the point, where GNOME3 became painful.


At first everything was fine. GNOME3 detected the second monitor and made it available for use with the correct resolution. But things started to become ugly when I actually wanted to work with it. GNOME3 decided that the internal screen is the primary screen, so the panel (or what remained of it) was on that screen. I can live with that, as that’s basically the same with GNOME2, but the question was: how to start an application in a way that it’s started on the big screen?
I knew that I couldn’t just use the keystrokes I’m used to, like Meta4+p, which was bound to launching dmenu in my Gnomad setup, as I knew that I was not running Gnomad at present. So I thought hard and remembered that GNOME had a run dialog itself, bound to Alt+F2. Relieved, I noticed that this shortcut had not gone away. I typed ‚chromium‘ and waited. A message appeared telling me that the file was not found. Okay. No, wait. What? I did not uninstall it, so I guess it should be there.
I tried several other applications and all of them were said to be unavailable. Most likely this is a bug, and bugs happen, but this was really serious for me.

Another approach was to use the activity screen. At first I used it manually, by moving the mouse there, launching chromium (surprise, surprise, it was still there) and moving it to the right screen, because I hadn’t found a shorter way to do that. There must be a better way to do that, I thought, and so I googled. Actually there is more than one better way to do it.

  1. There is a hidden hot spot in the corner of the second screen, too. If one finds it and moves the mouse over it, the activity screen will open on the primary monitor and on the secondary monitor, but the application list is only on the first. One can now type what one wants to start, hit Enter and tada: it’s on the screen where my mouse is. Not very intuitive, in my opinion, and I really would prefer if I had the same level of choice on the second screen.
  2. I can hit Meta4 and it opens the activity screen. From there everything is the same as described above.
There were many other small quirks that disturbed me, like the desktop having vanished (I used it seldom, but it was irritating that it wasn’t there anymore), shortcuts I was missing and so on. A lot of this is really specific to me being used to my previous setup, but I can’t help myself: I really need those little helpers.

So, at some point I decided to go back to Gnomad again, knowing that I would run into the next problem, because I would have to permanently disable the new gnome3 mode and instead launch GNOME in the fallback mode. Luckily that is as easy as typing the following in a terminal:

gsettings set org.gnome.desktop.session session-name 'gnome-fallback'
I quickly got this working again, but had to notice another cruel thing in GNOME3 that even disturbed my Gnomad experience. GNOME3 now binds Meta4+p to a function which switches the internal/external monitor setting, and that is a real PITA.

From this point on another journey began, one that eventually ended in a switch to a Gnome/Awesome setup, but this is a different story for a different time.