Things that make you a good programmer

If you ever wondered whether you are a good programmer (or not), you might think about the following points:

1. Repeat yourself. How else would you keep yourself busy if your customer has new requirements?

2. Reusing code is for people who cross to the other side of the street when a big dog walks along. No risk, no fun. How else would you find out whether the common idiom you use is really the way the job has to be done?

3. If you have a coding convention (e.g. how code has to be indented): Just ignore it. It's a good thing to make editors go crazy when they try to automatically detect the indentation of a source file. By always confusing the editor you keep up the fun for the people who try to change your code. It would be too boring for them to simply edit the file without the quiz of figuring out which indentation style applies to your code. Extra points for those who, in addition to mixing tabs, spaces, 4- and 8-space indents, expandtab and noexpandtab, write a vim modeline into their file that is guaranteed not to match the indentation of the file.

4. If you can complicate things: Do it. Numeric indices in arrays can be replaced with arbitrary strings; this makes your code more interesting. Especially if you have to move elements around in that array. And complicated code makes people who don't know it think that you are a good programmer. One step nearer to your goal, isn't it?

5. When working with different classes, invent a system that auto-loads the classes you need. Don't document it; that would be a risk for your job. It's not necessary anyway, because good programmers like riddles, and finding out what gets called, where, and why is a simple but entertaining one. After all, documenting is a bad idea in general. Especially if you make your system open source, because with good documentation it might be too easy for competitors to use your code to make money.

6. If you find ways to do extra function calls: Do it. It gives you the chance to refactor your code once the customer notices that it is too slow. Great opportunity, huh?

(But, seriously: Don't listen to me. It's just a cynical way to express my feelings.)

Building a 15W Debian GNU/Linux system

When the Intel Atom was revealed to the public I couldn't help but say "Wow!", because that piece of hardware promised to be a generic x86 1.6 GHz CPU with a total power consumption of 2 W, which is amazing considering that x86 hardware generally wasn't an option if you wanted to build a low-power system. But then the first chipsets were presented to the public and the Atom became a farce, because you don't want a chipset that eats over 25 W for a CPU which consumes 2 W. That was simply laughable.

Recently I found out that there is a new chipset out there, the Intel i945GSE, which runs at about 11 W TDP, including the soldered-on-board N270 Atom CPU. And I convinced myself that this could become my new home server. Together with a 2.5″ drive I could get a system with about 15 W maximum power consumption, which is amazing, given that the Arcor Easybox my provider gave me seems to have a similar maximum power consumption, yet it isn't able to provide the great flexibility the new Atom system is.

So I bought the following components:

  • Intel Essential Series D945GSEJT
  • A Mini-ITX M350 case, which is amazing because it's about the size of a Linksys router and should still provide a good thermal environment.
  • 2 GB Kingston HyperX DDR2 533 MHz SO-DIMM
  • a Western Digital Scorpio 320 GB hard drive

It took a while to get those components together, especially because I had originally decided on an Antec case, which I ordered from K&M Elektronik, but as they didn't keep their delivery promise I came to the M350. Lucky me.

Running Debian on this machine is the easy part, you would think. This is true, with some exceptions. First: Lenny runs fine. I installed it onto the notebook HD in my desktop and then put the disk into the Atom system when the first hardware arrived, and it worked right away. Except for a GRUB message, which is irritating and which I haven't managed to fix yet (GRUB says "Error: No such disk", only to show the menu a second later anyway and boot the system flawlessly).

What didn't work exactly reliably was the included network chip. It's quite a shame to say this, but if you buy an Intel board, wouldn't you expect it to run Intel components? Unfortunately this is not true for the Atom board. It has a Realtek RTL8111 network chip, which isn't properly supported by the 2.6.26 kernel (that means the kernel thinks it is and loads the r8169 module, which isn't able to properly detect a link).
The workaround for this is to take the r8168 module from Realtek and compile it for your kernel, but as I equipped this system with an Atheros 2424 PCIe chipset to play WLAN AP too, I had to upgrade to 2.6.31 anyway, and there the chip is fully supported by r8169.

Making the system an access point has been surprisingly easy as well. The greatest pain was finding a Mini PCIe WLAN card, because after all this isn't very common. However, I found one based on an Atheros 2424 chipset and bought it. I additionally bought an SMA antenna connector that I could mount into the case (the M350 has a preparation hole for it) and an SMA antenna.
Setting this up was fairly easy. You need to know that the newer mac80211-based drivers in Linux don't allow setting master mode directly. Instead you need an application that manages the card over netlink: that's hostapd. The unfortunate part is that the Lenny version is too old, so I built myself a (hacky) backport of the sid version, which isn't that hard anyway, because simply rebuilding against Lenny is enough. Additionally you need a 2.6.30 kernel with compat-wireless extensions, or a 2.6.31, because before that the ath5k driver didn't support master mode. After that, getting hostapd up is a matter of a 4 to 15 line configuration file. For me it is now running 802.11g with WPA and a short rekeying interval, with 14 lines of configuration.
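For reference, a configuration in that spirit might look like the sketch below. The interface name, SSID, and passphrase are placeholders of my own, not the values from this setup, but the option names are standard hostapd.conf keys:

```
# /etc/hostapd/hostapd.conf (sketch; all values are placeholders)
interface=wlan0          # the Atheros card
driver=nl80211           # manage the card via netlink/mac80211
ssid=myhomeap
hw_mode=g                # 802.11g
channel=6
auth_algs=1
wpa=1                    # plain WPA, as in the setup above (wpa=2 for WPA2)
wpa_passphrase=changeme
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
wpa_group_rekey=600      # short rekeying interval: every 10 minutes
```

With that in place, `hostapd /etc/hostapd/hostapd.conf` brings the AP up in the foreground, which is handy for debugging before letting the init script run it.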

All in all I'm satisfied with the system. Without any fan the CPU constantly runs at 55 °C, which is okay, given that it must operate between 0 and 90 °C according to the tech specs. The system and the disk are somewhat cooler (47 and 39 °C). The power of this system is more than enough. It boots quickly and working with it happens without latencies, even when the system is doing something. What I haven't tested yet is whether the power consumption actually fulfills the expectations. I will do so once I've got a wattmeter.

What an „Intel atom inside“ sticker could make of you

So I've got this cool Intel Atom board, the Intel Essential Series D945GSEJT, which is the first Atom ITX board that doesn't feature a 25 W chipset for a 2 W CPU. It's pretty cool, but one thing made me laugh.
Today I saw that an "Intel Atom inside" sticker came with the board. I thought: Well, nice, now I could, if I wanted, put this on my Mini-ITX system. But then I saw what was written next to it: "Use of the enclosed Intel Atom logo label is unauthorized and constitutes infringement of Intel's exclusive trademark rights unless you have signed the Intel Atom logo trademark license".

Isn’t it nice of Intel to supply me with a sticker which would make me a criminal if I’d use it?

Cool PHP code.

Did you know that in PHP you can write something like this:

$test = "foobar";
$test = sTr_RePlace("bar", "baz", $test);
$x = sPrinTf("%s is strange.", $test);

pRint $x . "\n";
eCho "foo";

What frightens me is that this is actually used, e.g. by this code snippet, which exists (in a similar form) in an unnamed PHP project:

echo sPrintF(_("Bla bla bla: %s"), $bla);

And yes, they do use echo to output the result of sprintf.

Update: So I got this great comment. The commenter wants to point out that the senseless use of "echo sprintf" is because of gettext. He says: "That's simply the way you use gettext." But this is simply not true. The difference between printf and sprintf is that the first one outputs the string, while the second one returns it. That means that in the above example printf could be used (instead of sprintf) without a useless echo call in front of it. The reason for using sprintf (and probably the reason you find it in a lot of applications using gettext) is that you can use it to fill a variable with the translated string, or to use the string in-place. A common use case for this is to hand a translated string over to a template engine, for example.

PHP and the great "===" operator

Let's suppose you've got an array with numeric indices:

[0] => 'bla'
[1] => 'blub'

Now you want to do something if the element 'bla' is found in that array.
Well, you know that PHP has a function array_search, which returns the key of the first found value. Let's say you write something like this:

if (array_search('bla', $array)) { do_something(); }

Would you expect that do_something would actually do something?
If yes: You are wrong.
If no: Great, you've understood some parts of PHP insanity, er, goodness.

Actually, if 'bla' didn't have the index 0, it would work, because 1, 2, 3, 4, etc. are TRUE. But unfortunately PHP has some sort of implicit casting, which makes 0 behave like FALSE, depending on the context. Following this, the if() works for all elements except the one at index 0.

You might be tempted to write

if(array_search… != FALSE)

But this wouldn't help you, because 0 would still evaluate to FALSE, leading to if(FALSE != FALSE), which is (hopefully obviously) never true.

A PHP beginner (or even an intermediate programmer, if he never stumbled across this case) might ask:
What's the solution to this dilemma?

Luckily, the PHP documentation is great; it tells you about this. And additionally PHP has got this great operator (===, with its counterpart !==), which causes people to ask "WTF?" when they hear about it for the first time. In addition to comparing the values, these operators also check the types of the operands. This leads to the wanted result, because 0 is an integer while FALSE is a boolean. So the solution for our problem looks like this:

if (array_search('bla', $array) !== FALSE) {

Isn’t this great?

To the rescue, git is here!

Consider the following scenario:

You work on a project for a customer that is handled in a Subversion repository.
The work you get comes in the form of tickets. Tickets may affect different parts
of the project, but they could also affect the same parts. Testing of your work
is done by a different person. For some reason it is desired that the fix for each ticket
is committed as only one commit in the Subversion repository, and only after it has been
tested. Commits for two tickets changing the same files should not be mixed.

Now: How do you avoid a mess when working with several patches that possibly affect the same files?

The answer is: git with git-svn. Currently my workflow looks like this:

  1. Create a branch for each ticket
  2. Make my changes for the ticket in this branch
  3. Create a patch from the changes in this branch and supply the tester with it
  4. Wait for feedback and, if necessary, repeat steps 2-4
  5. When testing is finished, run git rebase -i master in the branch, squash all commits into one, and build a proper commit message from the template git provides.
  6. Switch to master branch and merge the changes from the branch.
  7. Rebase master against the latest svn (git svn rebase) and dcommit
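For illustration, the steps above can be replayed in a throwaway repository. The ticket number and file names are invented; step 7 is only shown as a comment because it needs a real Subversion remote, and the sed invocation in step 5 is just a non-interactive stand-in for editing the rebase todo list by hand:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the branch is named 'master'
git config user.email dev@example.com
git config user.name dev
echo base > app.txt && git add app.txt && git commit -qm "state from svn"

# 1./2. create a ticket branch and do micro-commits on it
git checkout -qb ticket-1234
echo fix-a >> app.txt && git commit -qam "wip: first attempt"
echo fix-b >> app.txt && git commit -qam "wip: polish"

# 3. patch for the tester: all changes since the branch left master
#    (three-dot diff = diff against the merge base)
git diff master...ticket-1234 > ticket-1234.patch

# 4. tester feedback loop omitted here
# 5. squash the micro-commits into one commit, non-interactively:
#    rewrite every 'pick' after the first into 'squash' in the todo list
GIT_SEQUENCE_EDITOR="sed -i -e '2,\$s/^pick/squash/'" GIT_EDITOR=true \
    git rebase -i master

# 6. fast-forward master to the single squashed commit
git checkout -q master
git merge -q --ff-only ticket-1234

# 7. would sync with Subversion (needs git-svn and a real remote):
#    git svn rebase && git svn dcommit
git log --oneline master
```

The three-dot diff in step 3 is also a shortcut for the "find the last commit before my branch started" problem: it diffs against the merge base of the two branches automatically.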

That workflow works well for me. It gives me the possibility to do micro-commits as much as I want and still commit only one well-defined, well-tested commit into the project's SVN.
Just some minor drawbacks I haven't solved yet (you know, git /can/ do everything, but that is actually a problem in itself):

  • It's a bit annoying that I need to use git rebase -i and change n-1 lines for n commits so that they are squashed. It would be handy to say: squash all commits that happened in this branch into one.
  • Creating the diff for the tester requires me to do git log, search for the last commit before my first commit in that branch, and use it with git diff to create a patch. I had a quick look at git-rev-parse and felt that it is overwhelmingly complex to find out how to do this better. Too many possibilities. git is complex.

For now I cannot tell how well merging works as soon as conflicts arise. But I guess git will do well, although there is the possibility that it could get complex again.

Nevertheless, it's probably not a credit to git specifically, but to DVCSes in general. Anyway, I like it.