Directory-dependent shell configuration with zsh (Update)

For a while I’ve been struggling with a little itch. I use my company notebook both for company work and for Debian-related stuff, and whenever I switch between those two contexts I have to manually fix the environment configuration. This is mostly about environment variables, because tools like dch rely on variables such as DEBEMAIL, which need to differ between the contexts.
A while ago I had the idea to use directory-dependent configuration for that purpose, but I never found the time and mood to actually fix my itch.
Somewhere in the meantime I applied a quick hack ("case $PWD in …; export …; esac") to my zsh configuration to ease the pain, but it still did not feel right.


For the impatient: below you will find a way to simply use what is described here. The rest of the article contains the details of how to implement something like this.


The other day I was cleaning up and extending my zsh configuration and it came to my mind again. I then thought about what my requirements are and how I could solve this. First I considered using a ready-made solution, like the one in the Grml zsh configuration, but at that point I did not remember it (it took a hint from a co-worker *after* I had finished the first version of my solution). So I came up with my requirements:

  • Separate the profile-switching logic from the configuration (as far as possible): I don’t want to dive back into script logic every time I decide to change something, like adding or changing a variable. Generally I find a declarative approach much cleaner.
  • Avoid repeating myself
    Basically all I do when switching profiles is change environment variables. Usually I don’t want my shell to do extraordinary things, like brewing coffee, when I switch context, so I’d like to avoid typing an "export foobar…" for every single environment variable in every single profile.
This led to a configuration-first approach. When thinking about how to represent the configuration I looked into the data types zsh supports. zsh has associative arrays, which are perfect for my needs. I came up with something like this:

typeset -A EMAILS ENV_DEBIAN ENV_COMPANY
EMAILS=(
  "private"     "foo@bar.org"
  "company"     "baz@foo.org"
  "debian"      "schoenfeld@debian.org"
)
ENV_DEBIAN=(
  "DEBEMAIL"  "$EMAILS[debian]"
  "EMAIL"     "$EMAILS[debian]"
)
ENV_COMPANY=(
  "DEBEMAIL"  "$EMAILS[company]"
)

The next part was selecting the right profile. In the first version I used the old case logic, but it broke my "separate logic from configuration" paradigm. At about this point the co-worker led me to the grml approach, from which I borrowed an idea:


# Configure profile mappings
zstyle ':chpwd:profiles:*company*' profile company
zstyle ':chpwd:profiles:*debian*' profile debian

and the following code to look up the profile based on $PWD:


1 function detect_env_profile {
2   local profile
3   zstyle -s ":chpwd:profiles:${PWD}" profile profile || profile='default'
4   profile=${(U)profile}
5   if [ "$profile" != "$ENV_PROFILE" ]; then
6     print "Switching to profile: $profile"
7   fi
8   ENV_PROFILE="$profile"
9 }

For an explanation: zstyle is a zsh builtin which is used to "define and lookup styles", as the manpage says, or put differently: another way to store and look up configuration values.
It’s nice for my purpose, because it allows storing patterns instead of plain configuration values, and those patterns can be matched against $PWD easily with all of the zsh globbing magic. That is basically what happens in line 3: zstyle sets $profile to the matching zstyle configuration in the :chpwd:profiles: context, or to 'default' if no matching zstyle is found.
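To illustrate the lookup: with the mappings above, changing into a matching directory and running the lookup by hand would give something like this (the path is made up, it only has to match one of the patterns):

cd ~/code/debian/somepackage          # hypothetical directory
zstyle -s ":chpwd:profiles:${PWD}" profile profile
print $profile                        # -> debian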

The (almost) last part is putting it together with code to switch the profile:


1 function switch_environment_profiles {
2   detect_env_profile
3   config_key="ENV_$ENV_PROFILE"
4   for key value in ${(kvP)config_key}; do
5     export $key=$value
6   done
7 }

The only non-obvious part of this are lines 3 and 4. Remember, the profile arrays were defined as ENV_PROFILE, where PROFILE is the name of the profile. We cannot know that name in advance, so we have to construct the right variable name from the result of detect_env_profile. We do that in line 3 and look up that variable in line 4.
The crucial piece is the P flag in the parameter expansion. It tells zsh that we do not want the value of $config_key itself, but the value of the parameter whose name $config_key expands to.
The other flags, k and v, tell zsh that we want both keys and values from the array. If we omitted those flags it would give us the values only.
We then loop over that to configure the environment. Easy, huh?
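A tiny, made-up example of these flags in action (ENV_DEMO and its contents are hypothetical):

typeset -A ENV_DEMO
ENV_DEMO=( "FOO" "bar" )
config_key="ENV_DEMO"
print ${(kvP)config_key}   # prints: FOO bar
print ${(P)config_key}     # prints: bar (values only)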

We would be finished if this actually did anything: the code above still needs to be called. Luckily that’s pretty easy to achieve, as zsh has a hook that runs whenever the current directory changes. Making all this work is simply a matter of adding something like this:


function chpwd() {
  switch_environment_profiles
}
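If your setup already defines a chpwd() function for other purposes, an alternative sketch is to register the function with zsh’s add-zsh-hook helper instead, so nothing gets overwritten:

# register as one of possibly several chpwd hooks
autoload -Uz add-zsh-hook
add-zsh-hook chpwd switch_environment_profiles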

Now, one could say that the solution in the grml configuration has an advantage: it allows calling arbitrary commands when the profile changes, which might be useful to *unset* variables in a given profile, or for whatever else you can think of.
Well, it’s a matter of three lines to add that feature to the above code:


# Taken from grml zshrc, allow chpwd_profile_functions()
if (( ${+functions[chpwd_profile_$ENV_PROFILE]} )) ; then
  chpwd_profile_${ENV_PROFILE}
fi

to the end of switch_environment_profiles, and it becomes possible to additionally define a function chpwd_profile_PROFILE which is called whenever the profile is changed to that profile.
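For example, a hypothetical hook function could look like this (note that detect_env_profile upper-cases the profile name, so the function name suffix has to be upper case as well):

# hypothetical hook, called whenever the profile switches to DEBIAN
function chpwd_profile_DEBIAN {
  print "Switched to the Debian profile"
}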



USAGE: I have put the functions into a file that can be included in your zsh configuration; it can be found on GitHub.
Please see the README and the comments in the file itself for further usage instructions.

FAI, my notebook and me

I usually take my (company) notebook with me on business travels.
Twice now I’ve had the bad luck that something happened to it on such an occasion. Whenever you end up needing to reinstall your system in a hotel room, you might get the same wish I did: a way to quickly bring the system back into a state where I can work with it.

Well, I used FAI a while back for a customer. It’s a really great tool for automated installations and I prefer it over debian-installer preseeding. Apart from the fact that partitioning is way easier, it also gives me the power to drive the whole installation to a point where there is almost nothing left for me to do. It also supports installing entirely from CD or USB stick, which makes it suitable for me.

However, my notebook installation has a little "caveat" which made this a bit harder than previously thought. As it is a notebook and I carry company data on it, it has to be encrypted: full disk encryption.
The stable FAI version does not support this.
The problem is that the current crypto support in setup-storage (FAI’s disk setup tool) does not go very far. What is supported is creating a LUKS container with a keyfile, saving this keyfile to the FAI $LOGDIR and creating a crypttab.
Unfortunately, for a root filesystem this would leave us with an unbootable system, because it requires manual interaction. And using a keyfile for an encrypted root is a no-go anyway: we want a passphrase.
On a side note: encrypted-root support with a keyfile is more complex than with a passphrase, as you have to provide a script that knows how to get at the key.

So I started to experiment with scripts in the FAI configuration that added a passphrase and changed and recreated the crypttab. That worked, although it was very ugly.
But thanks to good cooperation on this with Michael Tautschnig, a FAI and Debian developer, the FAI experimental version 4.0~beta2+experimental18 now supports LUKS volumes with a passphrase that can be specified in the disk_config.
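I’m not reproducing my real configuration here, but a disk_config for an encrypted root might look roughly like the following sketch; the exact form of the luks:"..." option (and the passphrase placeholder used here) should be checked against the setup-storage manpage of that experimental version:

disk_config sda disklabel:msdos bootable:1
primary /boot   256   ext3   rw
primary -       0-    -      -

disk_config cryptsetup
luks:"changeme" /   /dev/sda2   ext3   rw,errors=remount-ro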

Now it’s actually possible to set up a system like mine with FAI out of the box. One thing (apart from the FAI configuration and setup as you want and need it) still has to be done, though:
the initrd support of cryptsetup requires busybox (otherwise you will see a lot of "command not found" errors and your system won’t boot) as well as initramfs-tools, which is standard nowadays.
So make sure that these packages are in your package config!
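In FAI terms that means something like the following lines in the package config of the class in question (the class name is made up):

# package_config/NOTEBOOK (hypothetical class)
PACKAGES install
cryptsetup
busybox
initramfs-tools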

So now I can define a FAI profile for my notebook, create a partial FAI mirror with the packages it needs and put all this together on a USB stick with fai-cd (don’t worry about the name, it can be used to create ISO images as well). I can carry this with me, and if I need it I plug it into my notebook and let FAI automatically reinstall my system. Nice 🙂

Update: Somebody asked me whether he understood correctly that I put my LUKS passphrase on a FAI USB stick in clear text. Obviously, the answer is and should be NO. What I do, and what I’d suggest to others: use a default passphrase in the FAI configuration and install with it; after all, on a fresh installation there is not much to protect. Once the installation is finished, *change* the passphrase to something secure by adding a new key slot and removing the old one.
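Changing it boils down to two cryptsetup calls on the LUKS device (the device name is just an example):

cryptsetup luksAddKey /dev/sda2      # add a key slot with the secure passphrase
cryptsetup luksRemoveKey /dev/sda2   # then remove the old default passphrase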

Building a 15W Debian GNU/Linux system

When the Intel Atom was revealed to the public I couldn’t help but say "Wow!", because that piece of hardware promised to be a generic x86 1.6 GHz CPU with a total power consumption of 2 watts, which is amazing considering that x86 hardware generally wasn’t an option if you wanted to build a low-power system. But then the first chipsets were presented to the public and the Atom became a farce, because you don’t want a chipset that eats over 25 W paired with a CPU that consumes 2 W. That was basically laughable.

Recently I found out that there is a new chipset out there, the Intel i945GSE, which runs at about 11 W TDP including the soldered-on N270 Atom CPU, and I convinced myself that this could become my new home server. Together with a 2.5″ drive I could get a system with a maximum power consumption of about 15 W, which is amazing given that the Arcor Easybox my provider gave me seems to have a similar maximum power consumption, while offering none of the flexibility the new Atom system does.

So I bought the following components:

  • Intel Essential Series D945GSEJT
  • A Mini-ITX M350 case, which is amazing because it’s about the size of a Linksys router and should still provide a good thermal environment.
  • 2 GB Kingston HyperX DDR2 533 MHz SO-DIMM
  • a Western Digital Scorpio 320 GB hard drive

It took a while to get those components together, especially because I had initially decided on an Antec case, which I ordered from K&M Elektronik, but as they didn’t keep their delivery promise I ended up with the M350. What luck.

Running Debian on this machine is the easy part, you would think. This is true, with some exceptions. First: Lenny runs fine. I installed onto the notebook hard drive in my desktop and then moved it into the Atom board once the first hardware arrived, and it worked right away, except for a GRUB message which is disturbing and which I haven’t managed to fix yet (GRUB says "Error: No such disk" only to show the menu a second later anyway and boot the system flawlessly).

What didn’t work quite reliably was the onboard network chip. It’s a bit of a shame to say, but if you buy an Intel board, wouldn’t you expect it to use Intel components? Unfortunately this is not true for the Atom board. It has a Realtek RTL8111 network chip, which isn’t properly supported by the 2.6.26 kernel (that is, the kernel thinks it is and loads the r8169 module, which isn’t able to properly detect a link).
The workaround is to take the r8168 module from Realtek and compile it for your kernel, but as I also equipped this system with an Atheros 2424 PCIe chipset to act as a WLAN AP, I had to upgrade to 2.6.31 anyway, and there the chip is fully supported by r8169.

Making the system an access point was surprisingly easy as well. The greatest pain was finding a Mini PCIe WLAN card, because those aren’t very common. I eventually found one based on an Atheros 2424 chipset and bought it, together with an SMA antenna connector that I could mount in the case (the M350 has a prepared hole for it) and an SMA antenna.
Setting this up was fairly easy. You need to know that the newer mac80211-based drivers in Linux don’t let you put a card into master mode directly; instead you need an application that manages everything and can drive the card via netlink, and that’s hostapd. The unfortunate part is that the Lenny version is too old, so I built myself a (hacky) backport of the sid version, which isn’t hard anyway, because rebuilding against Lenny is enough. Additionally you need a 2.6.30 kernel with the compat-wireless extensions, or 2.6.31, because before that the ath5k driver didn’t support master mode. After that, getting hostapd up is a matter of a configuration file of 4 to 15 lines. For me it is now running 802.11g with WPA and a short rekeying interval, using 14 lines of configuration.
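I’m not quoting my actual file, but a hostapd.conf along those lines could look like this sketch (interface name, SSID and passphrase are placeholders):

interface=wlan0
driver=nl80211
ssid=myhomeap
hw_mode=g
channel=6
wpa=1
wpa_passphrase=changeme
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
wpa_group_rekey=600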

All in all I’m satisfied with the system. Without any fan the CPU constantly runs at 55 °C, which is okay given that according to the tech specs it must operate between 0 and 90 °C. The board and the disk are somewhat cooler (47 and 39 °C). The performance of this system is more than enough: it boots quickly and working with it is free of latency, even when the system is busy doing something. What I haven’t tested yet is whether the power consumption actually meets my expectations. I will do so once I get hold of a wattmeter.

Gnomad = Gnome + Xmonad

Since I started using Linux I’ve used several window managers. I was at home in Blackbox and Fluxbox and at times used Enlightenment and some others, but it’s been a while now since I became a desktop environment user. I’m using GNOME because it provides what I need and I don’t have to spend an hour configuring it before it suits my needs. After all, I’m lazy.
Even though I’m quite satisfied with GNOME, there is one feature I have always been missing. Because I spend a lot of time tiling and arranging windows on my desktops, I noticed that I could use a tiling feature, something that was already present back in Windows 3.11. GNOME/Metacity does not have this feature, and given that a wishlist bug about it has been open for almost 7 years, it’s unlikely that this will ever change. There are separate tools, which I recently learned about, that can assist with this, for example the Perl script ‚wumwum‘. But this seems to be the wrong solution to a real problem. Additionally, wumwum does not work properly with Metacity, so I’d need to switch to another WM anyway, which led me to the point where I started thinking about integrating a true tiling WM into GNOME… once again.

First, I looked into awesome, a window manager I used some time ago.
But the documentation on configuring it is basically API documentation, with no obvious entry point.
It seems you would have to study the whole API just to change some simple settings (e.g. a padding for the GNOME panel and some always-floating applications). I even thought about learning Lua, because it seems like a language that is quick and easy to pick up, but honestly, if I need to study a programming language and a whole API reference just to configure a window manager, then IMHO there is something conceptually wrong with that piece of software.
In the end I came to Xmonad. This window manager is configured in Haskell, and I fear I’d need to learn that language as well if I wanted to configure weird things. But the scenario I want is well documented, and documentation for the more common configuration settings exists all over the place, so I don’t really feel inclined to learn more than needed.
Remember? I’m lazy.

Now I’m feeling quite happy with this combination. It didn’t take me much time to get used to the most basic keyboard shortcuts or to set the whole thing up. GNOME and Xmonad work together like a dream team, and I feel more productive now. As an additional plus I reinstalled the Vimperator Firefox plugin, because with my new desktop environment I use the keyboard more often for ordinary tasks like switching between apps or desktops, and I felt that being able to operate Firefox quickly with the keyboard, too, would be a plus. Well, it is.

Syncing mails

Okay, so I had a simple job to do. I have two mail accounts: a private account and a company account, and the company account has two folders containing private mails. I want to synchronize those folders to my private account. Which is the right tool for the job? One way would be to fire up a graphical mail user agent, say Thunderbird, set up both accounts and simply move the mails between them. But this has some drawbacks:

  • I don’t use such an MUA (I use mutt), which makes it unnecessarily hard, because I would first have to set up the accounts
  • Most graphical MUAs I know are bad at handling a large number of mails. At least Thunderbird does not even properly show what it is doing and how long it is expected to take

So I decided that I needed the kind of small tool that UN*X admins are used to having. After a quick apt-cache search, I found two tools:

  • imapcopy
  • imapsync

I installed both and looked at them. For the impatient, a spoiler: I decided to use imapsync.
I had a quick look at imapcopy and it did not have a proper manpage.
Instead it refers to the built-in help (imapcopy -h), which is not useful either, and to examples in /usr/share/doc.
After that I had a look at imapsync. It comes with a pretty good manpage and pretty good built-in usage information. Apparently I consider it very important that either the manpage or the built-in help is good enough to get started with a tool. Certainly tools exist for which a manpage is simply not enough, but I guess a tool to sync IMAP folders is not one of them.

After studying the manpage for about two minutes I was ready to construct a command line (a sketch of one follows the list below) and give it a --dry try. This parameter lets me see what the tool would do if I omitted it. That looked good, so I gave it a shot and it started to work. It has two flaws, though:

  1. Unfortunately it does not indicate its progress, and its normal messages are not much help either, because they contain numbers that do not actually refer to mails in either of the mailboxes (they soon get literally higher than the number of mails in both mailboxes combined) and I do not understand what they refer to.
  2. It sometimes crashed at random points with random messages. I didn’t look deeper into it, because restarting the script helped, so I cannot call it an easily reproducible problem. Over 3000 mails it happened once or twice, so not a big deal, but still annoying.
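For completeness, the kind of command line I’m talking about looks roughly like this (hosts, users, passwords and the folder name are placeholders; drop --dry to actually copy the mails):

imapsync --host1 imap.company.example --user1 patrick --password1 secret1 \
         --host2 imap.private.example --user2 patrick --password2 secret2 \
         --folder INBOX.private --dry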

Anyway, it did the job, which took some time because of my bandwidth.