January 5, 2012

Bringing GVFS to a good use

One of the GNOME features I have really liked since I started using GNOME is the ability to mount various network file systems with a few clicks and keystrokes. It enables me to quickly access NFS shares or files via SFTP. But so far these mounts weren't actual mounts in the classical sense, so they were only of limited use.

As a user who often works with terminals I was always halfway happy with that feature and halfway not:

- Applications have to be aware of that feature and able to make use of it, so it's often necessary to work around problems (e.g. movie players not being able to open a file on a share)
- No shell access to files

Previously this GNOME feature was realised with an abstraction layer called GNOME VFS, which all applications needed to use if they wanted to provide access to the "virtual mounts". It made no effort to actually re-use common mechanisms of Un*x-like systems, like mount points. So it was doomed to fail to a certain degree.

Today GNOME uses a new mechanism, called GVFS. It's realized by a shared library and daemon components communicating over D-Bus. At first glance it does not seem to change anything, so I was rather disappointed. But then I heard rumors that Ubuntu was actually making these mounts available at a special mount point in ~/.gvfs.
My Debian GNOME installation was not.

So I investigated a bit and found evidence of a daemon called gvfs-fuse-daemon, which is what handles that. I then figured out that this daemon lives in a package called gvfs-fuse and learned that installing it and restarting my GNOME session is actually all that's needed.
Now getting shell access to my GNOME "Connect to server" mounts is actually possible, which makes these mounts really useful after all. The only thing left to find out is whether e.g. the video player example now works from Nautilus. But even if it doesn't, I'm still able to use it via a shell.
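For the record, the whole fix boils down to something like this on Debian (package and daemon names as found above; the share name in the last line is a made-up example):

```shell
apt-get install gvfs-fuse      # as root; contains gvfs-fuse-daemon
# log out of and back into the GNOME session, then:
ls ~/.gvfs                     # every "Connect to server" mount shows up here
mplayer ~/.gvfs/"sftp for user on host"/some-movie.avi
```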

The solution is quite obvious, on the one hand. But totally non-obvious on the other.

A common user will probably not find that solution without aid. After all, the package name does not really suggest what the package is used for, since it refers to technologies instead of the problem it solves. Which is understandable. What I don't understand is why this package is not a dependency of the gnome meta package. But I haven't yet asked the maintainer, so I cannot really blame anybody.

However: Now GVFS is actually useful.

December 11, 2011

Why Gnome3 sucks (for me)

When I started using Linux, I started with a desktop environment (KDE) and then tried a lot of (standalone) window managers, including but not limited to Enlightenment, Blackbox, Fluxbox and Sawfish. But I was never really satisfied, as it always felt as if something was missing.
So it came that I became a user of a desktop environment again. By now I have been a GNOME user for at least five years.

Among the users of desktop environments, I'm probably not a typical user. In 2009 my setup drifted from a more or less standard GNOME 2.3 to a combination of GNOME and a tiling window manager, which I called Gnomad, as a logical continuation of something I've done since I started using computers: simplifying tasks which are not my main business.
I just didn't want to care about the hundred techniques to auto-mount a USB stick or similar tasks, which are handled just fine by a common desktop environment. And I didn't want to care about arranging windows, because after all the arrangement of my windows was always more or less the same.

But there were rumors that GNOME3 significantly changed the user experience, and I wanted to give it a try at some point in the future. That try was forced upon me by recent updates in Debian unstable, so I tested it for some days.

Day 1: Getting to know each other
My first day with GNOME3 was a non-working day. When I'm at home I mostly use my computer for some chatting and surfing the web, so I don't have great demands on the window manager/desktop environment.
Accordingly, the very first experience with GNOME3 was mostly a good one, except for some minor issues.
The first thing to notice in a positive way is the activities screen. I guess this one is inspired by Mac OS X's Exposé, but it's nevertheless a nice thing, as it provides an overview of open applications.
Apart from that, it's possible to launch applications from there. The classical application menu is gone, but this one is better. One can either choose with the mouse or start typing the application's name; it will search incrementally and show matches immediately. Hitting Enter is enough to launch the application.
Additionally, on the left, there is a launcher for your favorite applications.

This one led to the first question mark above my head.

I had opened a terminal via this launcher and, after switching to a different workspace, wanted to open another terminal.
So I just clicked the launcher again and had to notice that the GNOME developers and I have a different perception of what's intuitive, because that click led me back to the terminal on the first workspace. It took me some minutes to realize how to start a second terminal: by right-clicking the icon and clicking on "Open new window" or similar.

Day 2: Doing productive work
The next day was a work day and I was at a customer appointment to do support/maintenance tasks. On these appointments my notebook is not my primary work machine, so I could ease into using GNOME3 for productive work.
I can say that it worked, although I soon started to miss some keystrokes I'm used to. Like switching workspaces with Meta4+Number, or at least cycling through workspaces with Ctrl+Alt+Left/Right arrow keys. While the first is a shortcut specific to my GNOmad setup, the latter is something I knew from the good old GNOME2 days.
It just vanished from the default keybindings and did nothing. Apparently, as I learned afterwards, it has been decided to use the Up/Down arrow keys instead.

While for new users this will not be a problem at all, it is really hard for someone who has used GNOME for about 5 years, as these are keystrokes one is really used to.

Day 3: Going multihead

The appointment ended in the afternoon of the third day, so when I came back to the office, I had the chance to test the whole thing in my usual work environment. At the office I have my notebook attached to a docking station with a monitor attached to it. So usually I work in dual-head mode, with my primary work screen being the bigger external screen.

That was the point, where GNOME3 became painful.

At first everything was fine. GNOME3 detected the second monitor and made it available with the correct resolution. But things started to become ugly when I actually wanted to work with it. GNOME3 decided that the internal screen is the primary screen, so the panel (or what remains of it) was on that screen. I can live with that, as that's basically the same as with GNOME2, but the question was: how do I start an application in a way that it starts on the big screen?
I knew that I couldn't just use the keystrokes I'm used to, like Meta4+p, which was bound to launching dmenu in my GNOmad setup, as I was not running GNOmad at present. So I thought hard and remembered that GNOME has a run dialog itself, bound to Alt+F2. Relieved, I noticed that this shortcut had not gone away. I typed 'chromium' and waited. A message appeared telling me that the file was not found. Okay. No, wait. What? I did not uninstall it, so I guess it should be there.
I tried several other applications and all were reported as unavailable. Most likely this is a bug, and bugs happen, but this was really serious for me.

Another approach was to use the activities screen. At first I used it manually: moving the mouse over there, launching chromium (surprise, surprise, it was still there) and moving it to the right screen, because I hadn't found a shorter way to do that. There must be a better way, I thought, and so I googled. Actually there is more than one better way to do it.

  1. There is a hidden hot spot in the corner of the second screen, too. If one finds it and moves the mouse over it, the activities screen will open on both the primary and the secondary monitor, but the application list is only on the first. One can now type what one wants to start, hit Enter and, tada, it's on the screen where the mouse is. Not very intuitive, in my opinion, and I really would prefer to have the same level of choice on the second screen.
  2. I can hit Meta4 and it opens the activities screen. From there everything is the same as described above.

There were many other small quirks that disturbed me, like the desktop having vanished (I used it seldom, but it was irritating that it wasn't there anymore), shortcuts I was missing and so on. A lot of this is really specific to me being used to my previous setup, but I can't help myself: I really need those little helpers.

So, at some point I decided to go back to GNOmad again, knowing that I would run into the next problem, because I would have to permanently disable the new GNOME3 mode and instead launch GNOME in the fallback mode. Luckily that is as easy as typing the following in a terminal:

gsettings set org.gnome.desktop.session session-name 'gnome-fallback'

I quickly got this working again, but had to notice another cruel thing in GNOME3 that even disturbed my GNOmad experience: GNOME3 now binds Meta4+p to a function which switches the internal/external monitor setting, and that is a real PITA.

From this point on another journey began, which eventually ended in a switch to a GNOME/Awesome setup, but that is a different story for a different time.

December 10, 2011

Migrating from blogspot to Movable Type

A while ago I decided to migrate my existing blogspot blog to my own domain and webspace again. My reasoning was mostly that blogspot lacked some features which I'd like to have in my blog.
Additionally, my requirements have changed a bit since I originally moved to blogspot and, last but not least, blogspot was a compromise anyway.

So I started re-evaluating a possible software platform for my blog. In my numerous previous attempts to start blogging (there have been several blogs of mine on the internet since at least 2006), before I moved to blogspot, I used Wordpress. But there were quite some reasons against it, one of the biggest concerns being its security history. Also, while I have worked a lot with PHP in the past years, I have developed a serious antipathy against software written in PHP, which I couldn't just ignore.

In the end, the decision fell on Movable Type, because it's written in Perl, which is the language I prefer for most of my projects, because its features were (mostly) matching my wishes, and because I had heard some good opinions about it. Also it is used by my employer for our company blog.

So the next question was: How to migrate?

I decided to use Movable Type 5, although, at present, it does not seem to be the community's choice. At least the list of plugins supporting MT5 is really short. Above all there was no plugin to import blogger posts, which, after all, was the most important thing about the migration.
Luckily there is such a plugin for Movable Type 4, so I basically did the following:

  1. Install Movable Type 4
  2. Install the Blogger Import Plugin
  3. Import posts (it supports either the export file of blogger or directly importing posts via the Google API)
  4. Upgrade to Movable Type 5
  5. Check the result

Check the results, or: The missing parts

Obviously such an import is not perfect. Some posts contain images or in-site links. The importer is not able to detect that, and honestly it would have a hard time tracking it anyway.
So as soon as the content is migrated, it's time to look for the missing parts.

The process to find the missing parts is basically the same for all of them, and very easy:
Just search for your blogspot URL via the Search & Replace option in the Movable Type administration.

Now how to fix that? For links it's quite easy (although I forgot about them in the first run), as long as the permalinks have kept the same scheme.
In my case that is the case, since I decided to use the Preferred Archive option "Entry" in the blog settings for the new blog, and the default (if there even is such an option; I don't know) in blogspot. The importer does import the basename of the document, so fixing links is just a matter of replacing the domain part of the URL.
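Since only the domain part changes, the replacement is mechanical. Here is a sketch of the idea on a text dump (domains and file names here are made up; in practice I did this via Movable Type's Search & Replace):

```shell
# a tiny made-up sample to demonstrate on
printf 'link: http://myblog.blogspot.com/2011/01/some-post.html\n' > entries.txt

# replace only the domain part; the basename-based path stays intact
sed -i 's#http://myblog\.blogspot\.com/#http://blog.example.org/#g' entries.txt

cat entries.txt   # link: http://blog.example.org/2011/01/some-post.html
```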

For images it's some more work. One has to get the images somehow and upload them to Movable Type. Eventually it boils down to search and replace again, but I decided to do that manually, since I only have a very low number of images in my posts so far.

After that I did everything else that is not specific to the migration, like picking a template, modifying it to my wishes, considering the addition of plugins etc.
And here we are. There were some issues during the migration which I haven't covered here. I will blog about them another time.

December 1, 2011

PHP and big numbers

One would expect that one of the most used scripting languages of the world would be able to do proper comparisons of numbers, even big numbers, right? Well, PHP is not such a language, at least not on 32bit systems. Given a script like this:

<?php
$t1 = "1244431010010381771";
$t2 = "1244431010010381772";
if ($t1 == $t2) {
    print "equal\n";
}
?>

A current PHP version will output:

schoenfeld@homer ~ % php5 test.php
equal

It will do the right thing on 64bit systems (not claiming that the numbers are equal). Interestingly enough: a type-strict equality check (see my article from a few years ago) will not claim that the two numbers are equal.
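The safe ways around this are the type-strict comparison just mentioned, or bcmath if an actual numeric comparison of the big numbers is needed (both are stock PHP; this is just a sketch):

```php
<?php
$t1 = "1244431010010381771";
$t2 = "1244431010010381772";

// === compares the strings byte by byte, no lossy float conversion:
var_dump($t1 === $t2);       // bool(false)

// bcmath compares arbitrary-precision numbers given as strings:
var_dump(bccomp($t1, $t2));  // int(-1), i.e. $t1 < $t2
?>
```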


November 30, 2011

LDAP performance is poor..

Today's rant of the day: In a popular LDAP directory management tool, not to be named, there is a message indicating that the performance of the LDAP server is poor. While this might still be true: honestly, building LDAP filters like you do and then complaining about the LDAP server is like, let's say, searching for papers in the whole city, while you know they are certainly located within a single drawer, in a single closet, in a single room of your apartment, and then blaming the city council because your search took so damn long. What a mockery.

September 17, 2011

Struggling with Advanced Format during a LVM to RAID migration

Recently I decided to invest in another harddisk for my Atom system. That system, which I built up almost two years ago, has become the central system in my home network, serving as a fileserver to host my personal data, some git repositories etc., as a streaming server and, since I switched to a cable internet connection, also as a router/firewall. Originally I bought that disk to back up some data of the systems in the network, but I realized that all data on this system was hosted on a single 320GB 2,5" disk, and it became clear to me that, in the absence of a proper backup strategy, I should at least provide some redundancy.

So I decided, once the disk was in place, that the whole system should move to a RAID1 over the two disks. Basically this is not as hard as it may seem at first glance, but I had some problems due to a new sector size in some recent harddisks, which is called Advanced Format.

But let's begin at the start. The basic idea of such a migration is:

  1. Install mdadm with apt-get. Make sure to answer 'all' to the question which devices need to be activated in order to boot the system.

  2. Partition the new disk (almost) identically. Because the new drive is somewhat bigger, a fully identical layout wouldn't make sense, but at least the two partitions which should be mirrored on the second disk need to be identical. Usually this is achieved easily by using
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    In this case, it wasn't that easy. But I will come to that in a minute.

  3. Change the type of the partitions to 'FD' (Linux RAID autodetect) with fdisk

  4. Erase evidence of an eventual old RAID from the partitions, which is probably pointless on a brand-new disk, but we want to be sure:
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdb2
  5. Create two DEGRADED raid1 arrays from the partitions:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 missing
  6. Create a filesystem on the first raid device, which will become /boot.

  7. Mount that filesystem somewhere temporarily and move the contents of /boot to it:
    mount /dev/md0 /mnt/somewhere
  8. Unmount /boot, edit fstab to mount /boot from /dev/md0 and re-mount /boot (from md0)

  9. Create mdadm configuration with mdadm and append it to /etc/mdadm/mdadm.conf:
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf
  10. Update the initramfs and grub (no manual modification needed with grub2 on my system) and install grub into the MBR of the second disk.
    update-initramfs -u
    update-grub
    grub-install /dev/sdb
  11. The first moment to pray: reboot the system to verify it can boot from the new /boot.

  12. Create a physical volume on /dev/md1:
    pvcreate /dev/md1
  13. Extend the volume group to contain that device (replace yourvg with the name of your volume group):
    vgextend yourvg /dev/md1
  14. Move the physical extents of the volume group from the first disk to the degraded RAID:
    pvmove /dev/sda2 /dev/md1
    (Wait for it to complete... takes some time ;)

  15. Remove the first disk from the VG:
    vgreduce yourvg /dev/sda2
  16. Prepare it for addition to the RAID (see steps 3 and 4) and add it:
    mdadm --add /dev/md0 /dev/sda1
    mdadm --add /dev/md1 /dev/sda2
  17. Hooray! Look into /proc/mdstat. You should see that the RAID is recovering.

  18. When the recovery is finished, pray another time and hope that the system still boots with everything running from the RAID. If it does: finished :-)
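The recovery in step 17 can be followed with the standard tools (nothing here is specific to this setup):

```shell
watch -n 5 cat /proc/mdstat    # live view of the rebuild progress
mdadm --detail /dev/md1        # state, sync percentage and member devices
```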

Now to the problem with the Advanced Format: there is some action taking place among the hardware vendors to move to a new sector size. Physically my new device has a size of 4096 bytes per sector. Somewhat different from the 512 bytes disks used to have in the last decade.

Logically it still has 512 bytes per sector. As far as I understand, this is achieved by placing 8 logical sectors into one physical sector, so when partitioning a new disk the alignment has to be such that partitions start at a logical sector number which is a multiple of 8.

That, obviously, wasn't the case with the old partitioning on my first disk. So I had to create partitions by specifying start points manually and making sure they are divisible by 8. Otherwise fdisk would complain about the layout on the disk. This does not work with cfdisk, because it does not accept manual alignment parameters and, unfortunately, the partitions it creates have a wrong alignment. So: good old fdisk, and some calculations of how many sectors are needed and where to start, to the rescue.
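The calculation itself is trivial to script; a little helper (the function name is mine) that rounds a start sector up to the next multiple of 8 logical (512-byte) sectors, i.e. to a 4096-byte physical boundary:

```shell
# round the first argument up to the next multiple of 8 logical sectors
align() { echo $(( (($1 + 7) / 8) * 8 )); }

align 2048     # already aligned: prints 2048
align 291155   # prints 291160 -- the start sector chosen for /dev/sdb2 below
```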

So the layout is now:


Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      291154      144553+  fd  Linux raid autodetect
/dev/sdb2          291160   625139334   312424087+  fd  Linux raid autodetect

April 28, 2011

On Debian discussions

In my article "Last time I've used network-manager" I made a claim for which I've been criticized by some people, including Stefano, our current (and just re-elected) DPL. I said that a certain pattern, which showed up in a certain thread, was a prototype for discussions in the Debian surroundings.

Actually I have to admit that this was a very generalizing statement, turning my own point against the discussion culture directly back at myself.
Because, as Stefano correctly said, there has been some progress in the Debian discussion culture.
Indeed, there are examples of threads where discussions followed another scheme.
But in my own defence I have to say that such changes are like little plants (in the botanical sense). They take their time to grow, and as long as they are so very new, they are very vulnerable to all small interruptions. Regardless of how tiny those interruptions may seem.

I've been following Debian discussions for 6 or 7 years. The scheme I was describing is the one that had the most visibility of all Debian discussions. Almost every discussion that was important for a broader audience followed that scheme. There is a reason that Debian is famous for flamewars.
In a way it's quite similar to the perception some people have of network-manager. Negative impressions manifest themselves. Especially if they have years of time.
Positive impressions do not have a chance to manifest themselves as long as the progress is not visible enough to survive small interruptions.

I hope that I didn't cause too much damage with my comment, which got cited (context-free) on other sites. Hopefully the Debian discussion culture will improve further, to a point where there is no difference between the examples of very good, constructive discussions we already have in some parts of the project and the project-wide decision-making discussions which affect a broad audience and have often led to flamewars.

Directory-dependent shell configuration with zsh (Update)

For a while I've been struggling with a little itch. I'm using my company notebook both for company work and for Debian related stuff. Whenever I switched between those two contexts, I had to manually fix the environment configuration. This is mostly related to environment variables, because tools like dch et cetera rely on some of them, which need to be different for the different contexts, like DEBEMAIL.
A while ago I had the idea to use directory-dependent configuration for that purpose, but I never found the time and mood to actually fix my itch.
Somewhere in the meantime I applied a quick hack ("case $PWD in ...; do export ...; esac") to my zsh configuration to ease the pain, but it still did not feel right.


For the impatient: Below you find a way to just use what's described here. The rest of the article contains detailed information on how to implement something like this.


The other day I was cleaning up and extending my zsh configuration and the idea came to my mind again. I then thought about what my requirements are and how I could meet them. First I thought about using a ready-made solution, like the one in the Grml zsh configuration, but at that point I did not remember it (it took a hint by a co-worker *after* I had finished the first version of my solution). Then I came up with my requirements:

  • Separate profile-changing logic from configuration (as far as possible): I don't want to re-dive into script logic every time I decide to change something, like adding a variable or changing one. Generally I find a declarative approach much cleaner.
  • Avoid repeating myself
    Basically all I do when switching profiles is changing environment variables. Usually I don't want my shell to do extraordinary things, like brewing coffee, when I switch the context, so I'd like to avoid typing an "export foobar=..." for every single environment variable and every single profile.
This led to a configuration approach as a first step. When thinking about how to represent the configuration I looked into the data types supported by zsh. zsh supports associative arrays, which is perfect for my needs. I came up with something like this:
typeset -A EMAILS ENV_DEBIAN ENV_COMPANY
EMAILS=(
  "private"     "foo@bar.org"
  "company"     "baz@foo.org"
  "debian"      "schoenfeld@debian.org"
)
ENV_DEBIAN=(
  "DEBEMAIL"  "$EMAILS[debian]"
  "EMAIL"     "$EMAILS[debian]"
 )
ENV_COMPANY=(
  "DEBEMAIL"  "$EMAILS[company]"
)
The next part was selecting the right profile. In the first version I used the old case logic, but it was breaking my separation of logic and configuration. At approximately this point the co-worker led me to the grml approach, from which I borrowed an idea:


# Configure profile mappings
zstyle ':chpwd:profiles:*company*' profile company
zstyle ':chpwd:profiles:*debian*' profile debian

and the following code to lookup profile based on $PWD:

1 function detect_env_profile {
2   local profile
3   zstyle -s ":chpwd:profiles:${PWD}" profile profile || profile='default'
4   profile=${(U)profile}
5   if [ "$profile" != "$ENV_PROFILE" ]; then
6     print "Switching to profile: $profile"
7   fi
8   ENV_PROFILE="$profile"
9 }

For an explanation: zstyle is a zsh builtin which is used to "define and lookup styles", as the manpage says, or put differently: another way to store and look up configuration values.
It's nice for my purpose, because it allows storing patterns instead of plain configuration values, which can easily be compared against $PWD with all of the zsh globbing magic. This is basically what's done in line 3. zstyle sets $profile to the matching zstyle configuration in the :chpwd:profiles: context, or to 'default' if no matching zstyle is found.

The (almost) last part is putting it together with code to switch the profile:

1 function switch_environment_profiles {
2   detect_env_profile
3   config_key="ENV_$ENV_PROFILE"
4   for key value in ${(kvP)config_key}; do
5     export $key=$value
6   done
7 }
The only non-obvious parts of this are lines 3 and 4. Remember, the profiles were defined as ENV_PROFILE, where PROFILE is the name of the profile. We cannot know that key in advance, therefore we have to construct the right variable name from the result of detect_env_profile. We do that in line 3 and look that variable up in line 4.
The deciding aspect is the P flag in the parameter expansion. It tells zsh that we do not want the value of $config_key, but instead the value of whatever $config_key expands to.
The other flags, k and v, tell zsh that we want both keys and values from the array. Had we omitted those flags, it would have given us the values only.
We then loop over that to configure the environment. Easy, huh?

We would be finished, if any of this were actually invoked. The code above still needs to be called. Luckily for us that's pretty easy to achieve, as zsh has a hook for when the current directory changes. Making all this work is simply a matter of adding something like this:

function chpwd() {
  switch_environment_profiles
}
Now, one could say that the solution in the grml configuration has an advantage: it allows calling arbitrary commands on a profile change, which might be useful to *unset* variables in a given profile, or whatever else you can think of.
Well, it's a matter of three lines to extend the above code with that feature. Just add


# Taken from grml zshrc, allow chpwd_profile_functions()
if (( ${+functions[chpwd_profile_$ENV_PROFILE]} )) ; then
  chpwd_profile_${ENV_PROFILE}
fi

to the end of switch_environment_profiles, and now it's possible to additionally define a function chpwd_profile_PROFILE which is called whenever the profile is changed to that profile.


USAGE: I have put the functions into a file which can be included in your zsh configuration; it can be found on github.
Please see the README and the comments in the file itself for further usage instructions.

April 19, 2011

password-gorilla 1.5.3.4 ACCEPTED into unstable

The password-gorilla package had lacked some love for a while and at some point in time I orphaned it.
That happened due to the fact that the upstream author was pretty unresponsive and inactive, and my own TCL skills are very limited. As a result the password-gorilla package was in a bad state, at least from a user's point of view, with several (apparently) randomly occurring error messages and the like, stalled feature development etc.

But in the meanwhile a promising event arose. A guy named Zbigniew Diaczyszyn wrote me a mail saying that he intended to continue upstream development. Well, "meanwhile" is kind of an understatement. That first mail already happened in December 2009. And he asked me if I'd like to continue maintaining password-gorilla in Debian. I agreed, but as promising as it sounded to have a new upstream, I was not sure if that would work out. However: my doubts were not justified.

In the time between 2009 and now Zbigniew managed to become the official upstream (with the accreditation of the previous upstream), create a github project for it and make several releases.


I know there are several people out there who tested password-gorilla. I know there were magazine reviews covering the old version, which was a bit buggy with recent tcl/tk versions. That made a quite good multi-platform password manager, with support for very common password file formats, stand in a bad light.
I recommend previous users of password-gorilla to try the new version, which recently has been
uploaded to unstable.


April 15, 2011

Last time I've used network-manager..

There's an ongoing thread on the Debian mailing lists about making network-manager installed by default on new Debian installations. I won't say much about the thread itself. It's just a prototype example of Debian project discussions: discuss everything to death, and once it's dead, discuss a little more. And - very important - always restate the same arguments as often as you can. Or, if it's not your own argument, restate the arguments of others. Ending with the same argument stated 100 times. Even if it has already been disproved.

I don't have a strong opinion about the topic in itself. However, there is something I find kind of funny: a statement brought up by the people who strongly oppose network-manager as a default.
A statement I've heard so often that I can't count it anymore.

The last time I've tried network-manager it sucked.
It often comes in different masquerades, like:

  • network-manager is crap.
  • network-manager is totally unusable
  • network-manager does not even manage to keep the network connection during upgrades
But it basically boils down to the essence of the sentence I've written above. Sometimes I ask people who express this opinion a simple question:

When did you test network-manager the last time?
The answers are different, but again the essence of the answers is mostly the same (even if people would never say it that way):

A long time ago. Must have been around Etch.
And guess what: there was a time when I had a similar opinion. Must have been around Etch.
During the life cycle of network-manager between Etch and now, a lot has happened. I restarted using network-manager at some point during the Lenny development.
It has been my daily driver for managing the network connections on my notebook ever since. Yes, together with ifupdown, because, yes, network-manager does not support every possible network setup with all of the special cases possible. But it supports auto-configuring of wired and wireless devices, connecting to a new encrypted network, either in a WLAN or in an 802.1x LAN, using UMTS devices, using tethering with a smart phone. And all of that: with a few mouse clicks.

Yes, it had some rough edges in that life cycle. Yes, it had that nasty upgrade bug, which was very annoying.
But face it: It developed a lot. Here are some numbers:

Diffstat between the etch version and the lenny version:
 362 files changed, 36589 insertions(+), 36684 deletions(-)

Diffstat between the Lenny version and the current version in sid:
 763 files changed, 112713 insertions(+), 56361 deletions(-)

The upgrade bug has been solved recently. Late. But better late than never.

So what does that mean? It means that, if your last network-manager experience was with Lenny, or even worse around Etch, you had better give it another try, if you are interested in knowing what you are talking about. For now it seems that a lot of people do not know. Not even from a distance.