aptituz/ssh 2.3.2 published

I’ve just uploaded an updated version of my puppet ssh module to the forge.

The module aims at being a generic module to manage SSH servers and clients, including key generation and known_hosts management. It provides a mechanism to generate and deploy SSH keys without the need for storeconfigs or PuppetDB, using a server-side cache instead. This is neat if you want to retain SSH keys during the reprovisioning of a host.


The update is mostly to push out some patches I’ve received from contributors via pull requests in the last few months. It adds:

  • Support for the AllowUsers, AllowGroups and DenyUsers as well as DenyGroups parameters in the default sshd_config template. Thanks to cachaldora for the patches.
  • Support for multiple ports in the default sshd template. Thanks to Arnd Hannemann for that patch.
  • Fixes in the template for it to work with newer puppet versions. Untested by me, but this probably fixes compatibility with puppet 4. For that contribution my thanks go to Daine Danielson.

Apart from these changes I’ve added a couple of beaker tests.

If the module is of any use to you, I’d be happy about ratings on the Puppet Forge. The same is true for critical feedback, bug reports or (even better :) pull requests.

Inbox: Zeroed.

E-mail is a pest, a big time killer wasting your and my time each and every day. Of course it is also a valuable tool, one that nobody can do without. So how can it be of more use than trouble?

So far I’ve followed a no-delete policy when it comes to my mail, since space was not a problem at all. But it developed into a big nasty pile of mails that brought regular distraction each time I looked at my inbox. So I decided to adopt the Inbox Zero concept.

Step 1: Get the pile down

My e-mails had piled up over the years, so I had around 10000 mails in my inbox, some hundred of them unread. I needed to get this pile down and started with the most recent mails, trying to identify clusters of mails, filtering for them and then following these steps:

  • prevent: A lot of the mail I get consists of newsletters and mailing list posts (e.g. Debian lists and some open source products). For each of them, I decided whether I really want them to go to my inbox. If not: unsubscribe.
  • file or delete: Do I need it for reference or should it go to trash? I trashed basically every newsletter and every mail for which copies exist elsewhere (e.g. mailing list posts) and archived everything I was unsure about. It doesn’t matter, really. What is important is that the inbox gets down to zero, because that’s where you spend your daily time. Your archive folders can be as full as your search function is good 😉

Since it wasn’t possible to decide on a course for every mail (that would be a bit like hoovering in the desert), I did this only for the first 1000 mails or so. All mails older than a month were marked read and moved to the archive immediately after. Another approach would be to move all mails to a folder called DMZ and go to step 2.

Step 2: Prepare for implanting some habits

Most mails are the opposite of good old hackish perl code: read only. They are easy to act on, when they come around: just archive or delete them.

But the rest is what steals your time. Some mails require action, either immediately or in a while, some wait for a schedule, e.g. flight information or reservation mails and such. Whatever the reason is, you want to keep them around, because they still have a purpose. There are various filing systems for those mails, most of them GTD variants. As a Gmail user I found this variant, with multiple inboxes in a special Gmail view, interesting and am now giving it a try.

One word about the archive folders: I can highly recommend reducing the number of folders you archive to as much as possible.

Step 3: Get into habit

Now to the hard part: getting into the habit of acting on your inbox. Do it regularly, maybe every hour or so, and be prepared to make quick decisions.

Act on any mail immediately, which means either file/delete it, reply to it (if that is what takes less time) or “mark” it according to the filing system you prepared in step 2. And if no new mail has arrived, then it’s a good moment to review your marked mails and see whether any of them can be processed further.

Now let’s see whether my inbox will still be zeroed a month from now.

Sharing code between puppet providers

So you’ve written that custom puppet type for something and start working on another puppet type in the same module. What if you needed to share some code between these types? Is there a way of code reuse that works with the plugin sync mechanism?

Yes, there is.

Puppet even has two possible ways of sharing code between types.

Option #1: a shared base provider

A provider in puppet is basically a class associated with a certain (puppet) type, and there can be a lot of providers for a single type (just look at the providers of the package type!). It seems quite natural that it’s possible to define a parent class for those providers. So natural, in fact, that even the official puppet documentation writes about it.
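A rough sketch of what that can look like (the module, type, helper and file names are made up for illustration): the shared base class is shipped in the module’s lib directory and the actual providers inherit from it.

# lib/puppet/provider/mymodule_base.rb (hypothetical shared base class)
class Puppet::Provider::MymoduleBase < Puppet::Provider
  # helper shared by all child providers
  def self.read_config(path)
    File.exist?(path) ? File.read(path) : ''
  end
end

# lib/puppet/provider/foo/linux.rb (hypothetical provider using the base class)
require 'puppet/provider/mymodule_base'

Puppet::Type.type(:foo).provide(:linux, :parent => Puppet::Provider::MymoduleBase) do
  desc 'Example provider inheriting the shared helpers.'

  def exists?
    !self.class.read_config('/etc/foo.conf').empty?
  end
end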

Option #2: Shared libraries

The second option is a shared library, shipped in a certain namespace in the lib directory of the module; the idea is mostly sketched out in feature ticket #14149. Basically one defines a class (or module) in the special Puppetx namespace, using the author and module name in its name, in order to avoid conflicts with other modules.
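A minimal sketch of such a library, assuming a made-up author name “acme” and module name “mymodule” (the helper method is invented as well):

# Shared library in the Puppetx namespace (Puppet 3 style, hypothetical names)
module Puppetx
  module Acme
    module Mymodule
      # example helper that providers can share
      def self.parse_keyfile(path)
        File.readlines(path).map(&:strip).reject(&:empty?)
      end
    end
  end
end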

A library like this would be saved to a path matching its namespace in your module’s lib folder (with the hypothetical names above, that would be lib/puppetx/acme/mymodule.rb) and be included in your provider with something along the following:
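# In the provider file (the type name :mytype and the path are made up for
# illustration). The plugin-synced lib directory is on the load path, so the
# shared library can simply be required:
require 'puppetx/acme/mymodule'

Puppet::Type.type(:mytype).provide(:ruby) do
  def keys
    Puppetx::Acme::Mymodule.parse_keyfile('/etc/mytype/keys')
  end
end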

Compatibility with Puppet 4:

In puppet 4 the name of the namespace has changed slightly. It’s now called ‘PuppetX’ instead of ‘Puppetx’ and is stored in a file ‘puppet_x.rb’, which means that the require and the module name itself need to be changed:
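With the made-up names from above, that could look roughly like this:

# lib/puppet_x/acme/mymodule.rb (Puppet 4 style, hypothetical names)
module PuppetX
  module Acme
    module Mymodule
      def self.parse_keyfile(path)
        File.readlines(path).map(&:strip).reject(&:empty?)
      end
    end
  end
end

# ... and in the provider:
require 'puppet_x/acme/mymodule'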

For backward compatibility with puppet 3 you could instead add something like this, according to my co-worker mxey, who knows way more about ruby than I do:
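The rough idea (a sketch with the made-up names from above, not the literal snippet) is to alias one namespace to the other, so that the provider code can always refer to the same constant:

# On Puppet 3 the PuppetX constant does not exist yet, so alias it to the
# old Puppetx namespace; code can then refer to PuppetX::Acme::Mymodule
# regardless of the puppet version.
PuppetX = Puppetx unless defined?(PuppetX)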

Apart from this you’d need to change the require to be conditional on the puppet version and refer to the module by the aliased version (which is left as an exercise for the reader ;))

WordPress(ed) again.

I just migrated my blog(s) to WordPress.

Just recently I decided to put more time into blogging again. I wasn’t entirely happy with Movable Type anymore, especially since the last update broke my customized theme, and I struggled with the installation of another theme, which basically was just never found, no matter which path I put it into.

What I wanted was just more blogging, not all that technical stuff. And since the Movable Type makers also seem to have gone crazy (their “Getting started” site tells users to head over to movabletype.com to get a $999 license), I decided to go back to WordPress.

There were reasons why I didn’t choose WordPress back when I migrated from Blogger to MT, but one has to say that things have moved a lot since then. WordPress is as easy as it can be and has a thriving community, something I cannot say about Movable Type.

The migration went okay, although there were some oddities in the blog entries exported by MT (e.g. datetime strings with EM and FM at the EOL) and I needed to figure out how the multisite feature in WordPress works. But now I have exactly what I want.

Resources about writing puppet types and providers

When doing a lot of devops stuff with Puppet, you might get to a point where the existing types are not enough. That point is usually reached when a task at hand becomes extraordinarily complex when trying to achieve it with the Puppet DSL. One example of such a case could be if you need to interact with a system binary a lot. In this case, writing your own puppet type might be handy.

Now where to start, if you want to write your own type?

Overview: modeling and providing types

First thing that you should know about puppet types (if you do not already): a puppet resource type consists of a type and one or more providers.

The type is a model of the resource and describes which properties (e.g. the uid of a user resource) and parameters (like the managehome parameter) a resource has. It’s a good idea to start with a rough idea of which properties you’ll be managing with your resource and which values they will accept, since the type also does the job of validation.

What actually needs to be done on the target system is the provider’s job. There can be different providers for different implementations (e.g. a native ruby implementation or an implementation using a certain utility), different operating systems and other conditions.

A combination of a type and a matching provider is what forms a (custom) resource type.
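As a minimal, made-up example (the dummy_file resource, its property and the file handling are invented purely for illustration), a type and a matching provider could look roughly like this:

# lib/puppet/type/dummy_file.rb -- the model: parameters, properties, validation
Puppet::Type.newtype(:dummy_file) do
  ensurable

  newparam(:path, :namevar => true) do
    validate do |value|
      raise ArgumentError, 'path must be absolute' unless value.start_with?('/')
    end
  end

  newproperty(:content) do
    desc 'The desired content of the file.'
  end
end

# lib/puppet/provider/dummy_file/ruby.rb -- what to actually do on the system
Puppet::Type.type(:dummy_file).provide(:ruby) do
  def exists?
    File.exist?(resource[:path])
  end

  def create
    File.write(resource[:path], resource[:content].to_s)
  end

  def destroy
    File.delete(resource[:path])
  end

  def content
    File.read(resource[:path])
  end

  def content=(value)
    File.write(resource[:path], value)
  end
end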


Next I’ll show you some resources about puppet provider development that I found useful:

Official documentation:

Actually, types and providers are quite well documented in the official documentation, although it might not go too much into the details:

Blog posts:
A hands-on tutorial in multiple parts with good explanations can be found in the blog posts by Gary Larizza:

The book:
The probably most complete information, including explanations of the puppet resource model and its resource abstraction layer (RAL), can be found in the book Puppet Types and Providers by Dan Bode and Nan Liu.

The puppet source:
Last but not least, it’s always worth a peek at how others did it. The puppet source contains all providers of the official puppet release, as well as the base libraries for puppet types and providers with their api documentation: https://github.com/puppetlabs/puppet/

Bringing GVFS to a good use

One of the GNOME features I have really liked since the beginning of my GNOME usage is the ability to mount various network file systems with a few clicks and keystrokes. It enables me to quickly access NFS shares or files via SFTP. But so far these mounts weren’t actually mounts in the classical sense, so they were only of rudimentary use.

As a user who often works with terminals I was always halfway happy with that feature and halfway not:

  • Applications have to be aware of and enabled to make use of that feature, so it’s often necessary to work around problems (e.g. movie players not being able to open a file on a share)
  • No shell access to the files

Previously this GNOME feature was realised with an abstraction layer called GNOME VFS, which all applications needed to use if they wanted to provide access to the “virtual mounts”. It made no effort to actually re-use common mechanisms of Un*x-like systems, like mount points. So it was doomed to fail to a certain degree.

Today GNOME uses a new mechanism called GVFS. It’s realized by a shared library and daemon components communicating over DBUS. At first glance it does not seem to change anything, so I was rather disappointed. But then I heard rumors that Ubuntu was actually making these mounts available at a special mount point in ~/.gvfs.
My Debian GNOME installation was not.

So I investigated a bit and found evidence of a daemon called gvfs-fuse-daemon, which apparently handles that. After that I figured out that this daemon lives in a package called gvfs-fuse and learned that installing it and restarting my GNOME session is actually all that needs to be done.
Now getting shell access to my GNOME “Connect to server” mounts is actually possible, which makes these mounts really useful after all. The only thing left to find out is whether e.g. the video player example now works from Nautilus. But even if it doesn’t, I’m still able to use it via a shell.

The solution is quite obvious on the one hand, but totally non-obvious on the other.

A common user will eventually not find that solution without aid. After all, the package name does not really suggest what the package is used for, since it refers to technologies instead of the problem it solves. Which is understandable. What I don’t understand is why this package is not a dependency of the gnome meta package. But I haven’t yet asked the maintainer, so I cannot really blame anybody.

However: Now GVFS is actually useful.

Why Gnome3 sucks (for me)

When I started using Linux, I started with a desktop environment (KDE) and then tried a lot of (standalone) window managers, including but not limited to Enlightenment, Blackbox, Fluxbox and Sawfish. But I was never really satisfied, as it always felt as if something was missing.
So it came that I became a user of a desktop environment again. By now I have been a GNOME user for at least five years.

Among the users of desktop environments, I’m probably not a typical user. In 2009 my setup drifted from a more or less standard GNOME 2.3 to a combination of GNOME and a tiling window manager, which I called Gnomad, as a logical continuation of something I’ve done for as long as I’ve used computers: simplifying tasks which are not my main business.
I just didn’t want to care about the hundred techniques to auto-mount a USB stick or similar tasks, which are handled just fine by a common desktop environment. And I didn’t want to care about arranging windows, because after all the arrangement of my windows was always more or less the same.
But there were rumors that GNOME3 significantly changed the user experience, and I wanted to give it a try at some point in the future. This try was forced by the latest updates in Debian unstable, so I tested it for some days.

Day 1: Getting to know each other
My first day with GNOME3 was a non-working day. When I’m at home I’m mostly using my computer for some chatting and surfing the web, so I don’t have great demands on the window manager/desktop environment.
Accordingly the very first experience with GNOME3 was mostly a good one, except for some minor issues.
The first thing to notice in a positive way is the activities screen. I guess this one is inspired by Mac Exposé, but it’s nevertheless a nice thing, as it provides an overview of the opened applications.
Apart from that, it’s possible to launch applications from there. The classical application menu is gone, but this one is better. One can either choose with the mouse or start typing the application’s name and it will incrementally search for it and show it immediately. Hitting Enter is enough to launch the application.
Additionally, on the left, there is a launcher for your favorite applications.

This one led to the first question mark above my head.

I had opened a terminal via this launcher and now wanted to open another terminal after I had switched to a different workspace.
So I just clicked it again and had to notice that the GNOME developers and I have a different perception of what’s intuitive, because that click
led me back to the terminal on the first workspace. It took me some minutes to realize how I’m able to start a second terminal: by just right-clicking on the icon and clicking on Open new window or similar.

Day 2: Doing productive work

The next day was a work day and I was at a customer appointment to do support/maintenance tasks. On these appointments my notebook is not my primary work machine, so I could ease into using GNOME3 for productive work.
I can say that it worked, although I soon started to miss some keystrokes which I’m used to, like switching workspaces with Meta4+Number, or at least switching workspaces by cycling through them with Ctrl+Alt+Left/Right arrow keys. While the first is a shortcut specific to my GNOmad setup, the latter is something I knew from the good old GNOME2 days.
It just vanished from the default keybindings and did nothing. Apparently, as I learned afterwards, it has been decided to use the Up/Down arrow keys instead.

While for new users this will not be a problem at all, it is really hard for someone who has been using GNOME for about 5 years, as these are keystrokes one is really used to.

Day 3: Going multihead

The appointment ended on the third day in the afternoon, so when I came back into the office, I had the chance to test the whole thing in my usual work environment. At the office I have my notebook attached to a docking station which has a monitor attached to it. So usually I work in dual head mode, with my primary work screen being the bigger external screen.

That was the point, where GNOME3 became painful.

At first everything was fine. GNOME3 detected the second monitor and made it available for use with the correct resolution. But things started to become ugly when I actually wanted to work with it. GNOME3 decided that the internal screen is the primary screen, so the panel (or what is left of it) was on that screen. I can live with that, as that’s basically the same with GNOME2, but the question was: how to start an application in a way that it’s started on the big screen?
I knew that I couldn’t just use the keystrokes I’m used to, like Meta4+p, which was bound to launching dmenu in my GNOmad setup, as I knew that I was not running GNOmad at present. So I thought hard and remembered that GNOME has a run dialog itself, bound to Alt+F2. Relieved, I noticed that this shortcut had not gone away. I typed ‘chromium’ and waited. A message appeared telling me that the file was not found. Okay. No, wait. What? I did not uninstall it, so I guess it should be there.
I tried several other applications and was told that none of them were available. Most likely this is a bug, and bugs happen, but this was really serious for me.

Another approach was to use the activities screen. At first I used it manually, by moving the mouse there, launching chromium (surprise, surprise, it was still there) and moving it to the right screen, because I hadn’t found a shorter way to do that. There must be a better way, I thought, and so I googled. Actually there is more than one better way to do it.

  1. There is a hidden hot spot in the corner of the second screen, too. If one finds it and moves the mouse over it, the activities screen will open on the primary monitor and on the secondary monitor, but the application list is only on the first. One can now type what one wants to start, hit Enter and tada, it’s on the screen where the mouse is. Not very intuitive, in my opinion, and I really would prefer to have the same level of choice on the second screen.
  2. I can hit Meta4 and it opens the activities screen. From there everything is the same as described above.
There were many other small quirks that disturbed me, like the desktop having vanished (I used it seldom, but it was irritating that it wasn’t there anymore), shortcuts I was missing and so on. A lot of this is really specific to me being used to my previous setup, but I can’t help it: I really need those little helpers.

So, at some point I decided to go back to GNOmad again, knowing that I would run into the next problem, because I would have to permanently disable the new GNOME3 mode and instead launch GNOME in the fallback mode. Luckily that is as easy as typing the following in a terminal:

gsettings set org.gnome.desktop.session session-name 'gnome-fallback'
I quickly got this working again, but had to notice another cruel thing in GNOME3 that even disturbed my GNOmad experience: GNOME3 now binds Meta4+p to a function which switches the internal/external monitor setting, and that is a real PITA.

From this point on another journey began, which eventually ended in a switch to a GNOME/Awesome setup, but this is a different story for a different time.

Migrating from blogspot to Movable Type

A while ago I decided to migrate my existing blogspot blog back to my own domain and webspace. My reasoning was mostly that blogspot lacked some features which I’d like to have in my blog.
Additionally, my requirements have changed a bit since I originally moved to blogspot and, last but not least, Blogspot was a compromise anyway.

So I started re-evaluating a possible software platform for my blog. In my numerous previous attempts to start blogging (there have been several blogs of mine on the internet since at least 2006), before I moved to Blogspot, I used WordPress. But there were quite some reasons against it, one of the biggest concerns being its security history. Also, while I have worked a lot with PHP in the past years, I have developed a serious antipathy against software written in PHP, which I couldn’t just ignore.

In the end, the decision fell on Movable Type, because it’s written in Perl, which is the language I prefer for most of my projects, because its features (mostly) matched my wishes, and because I had heard some good opinions about it. It is also used by my employer for our company blog.

So the next question was: How to migrate?

I decided to use Movable Type 5, although, at present, it seems not to be the community’s choice. At least the list of plugins supporting MT5 is really short. Most importantly, there was no plugin to import blogger posts, which, after all, was the most important part of the migration.
Luckily there is such a plugin for Movable Type 4, so I basically did the following:

  1. Install Movable Type 4
  2. Install the Blogger Import Plugin
  3. Import posts (it supports either the export file of blogger or directly importing posts via the Google API)
  4. Upgrade to Movable Type 5
  5. Check the result
Check the results, or: The missing parts
Obviously such an import is not perfect. Some posts contain images or in-site links. The importer is not able to detect that, and honestly it would have a hard time tracking that anyway.
So as soon as the content is migrated, it’s time to look for the missing parts.

The process of finding missing parts is basically very easy and the same for the various kinds of missing parts:
Just search for your blogspot URL via the Search & Replace option in the Movable Type administration.

Now how to fix that? For links it’s quite easy (although I forgot about them in the first run), as long as the permalinks have kept the same scheme.
In my case that is the case, since I decided to use the Preferred Archive option “Entry” in the blog settings for the new blog and the default (if there is such an option, because I don’t know)
in Blogspot. The importer does import the basename of the document, so fixing links is just a matter of replacing the domain part of the URL.

For images it’s some more work. One has to get the images somehow and upload them to Movable Type. Eventually it boils down to search and replace as well, but I decided to do that manually, since I only have a very low number of images in my posts so far.

After that I did everything else which is not specific to the migration, like picking a template, modifying it to my wishes, considering the addition of plugins, etc.
And here we are. There were some issues during the migration which I haven’t covered here. I will blog about them another time.


PHP and big numbers

One would expect that one of the most used scripting languages in the world would be able to do proper comparisons of numbers, even big numbers, right? Well, PHP is not such a language, at least not on 32-bit systems. Given a script like this:


<?php
$t1 = "1244431010010381771";
$t2 = "1244431010010381772";

if ($t1 == $t2) {
    print "equal\n";
}

A current PHP version (on a 32-bit system) will output:

schoenfeld@homer ~ % php5 test.php
equal
It will do the right thing on 64-bit systems (not claiming that the numbers are equal). Interestingly enough, a type-strict equality check (see my article from a few years ago) will not claim that the two numbers are equal.