xubuntu and my /var/storage

It was back in the mid-nineties when I first received a Slackware 3 install CD. I still somewhat recall the install process, and the terror of wiping out the other OS on the single disk I had. Since then I have tried many releases of RedHat and Fedora Core, until recently I turned to Xubuntu for old PCs.
I also used Xubuntu for my media server, a network-only small box fitted with a 1 TB hard disk that I keep beside my TV. This media server hosts my videos and, most importantly, all my pictures. I have photographs going back 10 years, tons of photographs… when I stored (a downsized version of) them on my wife's digital photo frame, I counted 14000 pictures.
So, last Friday (Friday the 17th – I am not superstitious, mind you) I had a couple of hours of free time and decided to upgrade the Xubuntu version of my media server.
I saved the main configuration files and a couple of packages I had installed that were not part of the distribution. Since Linux lives on a CF card and the 1 TB disk is just data, I didn't bother backing it up. I mean, I do have a backup disk I sometimes sync with my actual data, but I didn't feel the urge to do so.
I started the installation and chose to upgrade the current installation.
After a while I heard my disk heads clicking… and fear rose. I went into a shell, checked the mounted partitions and found that my data disk was mounted; an ls in the directory showed that more than half of the directories were gone.
Terrified, I switched the box off and turned to the other computer to search for an undelete tool.
What happened? My data disk was mounted under /var/storage. I chose this place because it was a permanent presence in the file system and because /var is where MySQL and PostgreSQL keep their database data. By analogy, I thought it was a proper place for my media database. Well, I was wrong (or at least the Xubuntu installer thought so): the Xubuntu upgrade wipes out the content of /var. Maybe it preserves something, but nothing is asked.
Googling for the right words, I found the way to recover deleted files on a ReiserFS partition. The information is a bit dated, but everything worked as expected. I guess I got back 90% of the original data. The recovered stuff was in the lost+found directory; some files had their real name, while others were renamed after their i-node number. Luckily, since they were pictures, I could recover information from the EXIF data.
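The exact commands I used are not in the post; the usual ReiserFS tree rebuild, which resurrects unlinked files into lost+found, goes roughly like this (device and mount point are examples, and the rebuild is itself risky – image the disk first if you can):

```shell
# Never run this on a mounted filesystem
umount /dev/sdb1
# Rebuild the filesystem tree, scanning the whole partition so that
# deleted-but-not-overwritten files are picked up again
reiserfsck --rebuild-tree --scan-whole-partition /dev/sdb1
# Remount and look at what came back
mount /dev/sdb1 /var/storage
ls /var/storage/lost+found
```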
Well… in the end it was just a big panic moment, and everything was sorted out. I still have maybe two or three evenings of work to put everything back in place.
As my esteemed colleague often says – if it nearly works, that's enough; do not change it or it breaks (it sounds much better in Italian: “Se tanto va che quasi basta, non toccar se no si guasta”).

Fast boot!

One of my complaints about Linux has always been the biblical boot time – endless disk churning, an endless listing of obscure lines on the screen. With the latest distros this aspect of the system has improved, but Linux still lags behind Windows.
This work, with an impressive 5 seconds for the whole system up and running, points in the right direction. The authors claim that in order to achieve this result they just set a time budget for each subsystem to start. That is, according to them, aiming at a well-defined goal rather than a generic “make it faster” statement.
I think the winning move is not the fixed time budget, but a disenchanted, pragmatic look at the boot process with two strong propositions – 1) don't make everyone pay for something that only a few use, and 2) do we really need to do it this way? I found the approach taken with GDM very clever. Do we really need GDM, and to pay the full 2 seconds it takes to wake up? No, because we can start the last user's session and lock it with the screensaver.

yum upgrade

Having a bit of spare time at work, I decided it was time to upgrade my Linux box to the latest version of my favorite distro: Fedora Core 9. KDE4 in FC9 has a very slick and professional look, whereas KDE3 on FC8 looked quite hobbyist. Rather than going the old proven way, I decided to try the yum upgrade method. With yum upgrade you don't need to download an iso image, burn it and boot from the DVD. Instead you just let yum download all the new packages while you continue working… at least for a while.
Everything starts by yum’installing the preupgrade package:
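The command itself was lost from the post; given the package name, it would have been along these lines:

```shell
# Install the upgrade helper, then launch its GUI
yum install preupgrade
preupgrade
```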

This is a handy GUI tool that helps you throughout the upgrade process. Just go through the wizard and let it download what it needs. Apparently not everyone had such a smooth experience; if something fails you may want to look up how to do it manually.
Preupgrade ends its activity by changing /boot/grub/menu.lst so that the next boot will load the anaconda system configuration tool to perform the final setup.
In my case it didn't fully work – I had to change the default property manually, setting it to point to the preupgrade boot entry.
The next step has to be performed at the computer console, since it requires a reboot. You will go through the usual anaconda Linux setup; again, follow the wizard path and pray to your favorite divinity.
At the end of the configuration another wealth of bytes is downloaded from the internet for quite a long time. During the download I got an error (likely I didn't pray to my favorite divinity enough) – the system was unable to find the kernel package. Luckily the system had the network configured and I could log in on another console. At that point I looked for the specified file (kernel-2.6.25-14.fc9.i686.rpm) and downloaded it via wget.
To find where to put the rpm file, I just issued a system-wide:
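The invocation was lost from the post; a plain name search over the places where the installer caches packages does the job (the pattern is an example):

```shell
# Look for already-downloaded rpm packages anywhere under /var,
# silencing permission errors
find /var -name '*.rpm' 2>/dev/null
```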

And I put the kernel where most rpm files were. Then I got back to the GUI and hit the “Retry” button. Anaconda looked satisfied and continued to download stuff.
After downloading the universe and a couple of parallel ones, anaconda told me that it was configuring and that it could take a couple of minutes. Likely because of the mass of the download some time warp occurred, and anaconda kept crunching for a couple of hours of my biological time. Eventually it asked to reboot the machine.
At the next reboot I got a black screen with a laconic grub prompt. Apparently this whole procedure has something against the GRUB loader. Well, if it happens, it is not hard to get out – just use the Tab key for completion of both commands and file names. You have to set the kernel file and the root device, and eventually boot the system.
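From memory, the incantation at the bare grub prompt looks like this (device and file names are examples – Tab completion shows the real ones):

```
grub> root (hd0,0)
grub> kernel /vmlinuz-2.6.25-14.fc9.i686 ro root=/dev/VolGroup00/LogVol00
grub> initrd /initrd-2.6.25-14.fc9.i686.img
grub> boot
```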

At this point you should be greeted with the standard FC9 boot sequence that ends in the login manager.
Everything fine? Well, not quite. I switched to FC9 mainly for KDE4, so I was quite disappointed to find myself in the GNOME desktop environment.
After yum'installing the switchdesk utility and running it, nothing happened. I had to manually create the file

with the single line:

Then, as root, I did:

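The file name, its content and the root command were lost from the post. A plausible reconstruction, assuming Fedora's conventional system-wide desktop chooser – both the file content and the restart step here are my guesses, not the author's exact commands:

```shell
# Assumed: point the system-wide default desktop at KDE
echo 'DESKTOP="KDE"' > /etc/sysconfig/desktop
# Assumed: bounce the display manager so the change takes effect
/sbin/telinit 3 && /sbin/telinit 5
```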
Et voilà, the new system was not only up and running, but tailored to my taste.

Fedora Core 8 audio

Fedora Core is supposed to be “bleeding edge” technology. What you may not expect is that it puts up the edge while you provide the bleeding. Suppose you want to write an audio application, something simple: gathering audio from a microphone and feeding audio to a speaker. How could that be a nightmare?
Quite easily, right from the beginning. How many audio APIs does Linux have? Two? Three? Well, I counted seven – OSS, ALSA, JACK, esd, PulseAudio, OpenAL, GStreamer. You may argue that not all these APIs are on the same abstraction level; yes, I agree. Nonetheless you have to decide what to pick and hope that the API you picked is widespread enough among your target customers.
I had some legacy code that relied on OSS, so I stuck with that. Knowing that ALSA was the answer to OSS, I tried it out and was left wondering what the question was. ALSA is significantly more complex to program than OSS; even the bare minimum playback code is longer for ALSA than for OSS.
So I gave up on ALSA.
The next problem arose when I tried my legacy code on the brand new Fedora Core 8. The microphone seemed not to work. I tried the command line OSS test with:
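The exact command was lost from the post; a typical OSS-style check, assuming the legacy /dev/dsp device, records raw audio and plays it back:

```shell
# Record raw audio from the OSS device – talk into the mic, Ctrl-C to stop
cat /dev/dsp > /tmp/mic.raw
# Play it back through the same device; you should hear yourself
cat /tmp/mic.raw > /dev/dsp
```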

just to discover that no audio was collected.
After some twiddling, and after trying without success the canned solutions I found, I decided that I needed another distro to test my code.
This time too, as soon as I found a solution there was a new problem ready for it. Fedora Core 8 uses logical volumes, so it is not straightforward to shrink them from, say, the Debian installer. In order to resize a logical volume, first you have to unmount the filesystem on it, shrink the filesystem, and eventually shrink the logical volume.
You can't do this on the default install from the running system, because there is just a single partition with the whole system (except “/boot”). You can't do it from the Debian installer either, because “etch” doesn't manage logical volumes.
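For reference, the required order looks like this – device name and sizes are made-up examples, and this assumes an ext3 filesystem inside the volume:

```shell
# 1. The filesystem must not be mounted
umount /dev/VolGroup00/LogVol00
# 2. Check it, then shrink the filesystem FIRST
e2fsck -f /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00 20G
# 3. Only then shrink the logical volume underneath it
lvreduce -L 20G /dev/VolGroup00/LogVol00
```

Shrinking the volume before the filesystem truncates live data, which is why the order matters.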
As a third approach I went for another PC-like device we are developing for a customer. Before doing anything, I tested the OSS driver on this box and it worked, so I downloaded my code and… “error: missing libSDL”.
Sooner or later I will master this adversity, I'm sure; I just hope not to bleed too much in the process.

Back from Sciliar

I had great days on the Alpe di Siusi (Alp of Siusi), or SeiserAlm as those who live there call their home. It is a lovely, smooth plateau at around 1800 m, braced by imposing Dolomite peaks. Sasso Piatto (“Flat Stone”, a sort of understatement) bounds the east side, while Denti di Terra Rossa (“Red Soil Teeth”) bound the south side, ending with the tooth-shaped Sciliar peak. We had sun for nearly seven days and, despite the warm winter, there was enough snow to ski.
Alas, in order to appreciate great things we have to compare them with the grey, dull industrial landscape of Castellanza; that's why (I guess) I'm back home and at work.
The first interesting surprise hitting me at work was that the milestone I was working toward had been moved up. Our customer's product has been selected for a design prize, so we are expected to deliver the working product earlier. Anyway, we're working hard, against time and hardware shortages, to hit the milestone nonetheless.
At home, Santa (in the person of my wife) gave me an Xbox 360 and I started playing a not-so-Xmas-spirited game: Gears of War. I'm at about the first boss and I must say it's great. From a technical viewpoint I think this is one of the first real next-gen games. It runs on the Unreal 3 engine and the look is as detailed as it is awesome. The gameplay is based on taking cover, i.e. as soon as enemies are encountered you'd better take cover or you get badly shot. This is somewhat different from the classical shooter, where the player drives a Rambo-like bulletproof character (well, in Serious Sam this was intended). The first boss is a chase – run away from the monster, let him smash the doors for you and eventually take him down. Great.
While I was still fresh from the holidays and relaxed from Gears of War, I decided to update my notebook to the latest Linux available. I gave a brief look at Sabayon Linux, only to discover that it behaves badly with the Toshiba touchpad and apparently has no support for my wireless adapter (I can't believe that today's distros still do not support the Centrino wireless adapter, which is so widespread and at least two years old). So I turned to what I know quite well – Fedora Core 6.
I opted for the upgrade option instead of the install. Years ago I used to upgrade, only to find that the resulting system was neither completely new nor old, and was often prone to glitches. A friend of mine suggested never upgrading, but rather backing up the /home directory, installing, and restoring it. This time I was so light from the holidays that I decided an upgrade would do.
Well, I was wrong.
Yes, I got a sort of FC6 tailored on my previous FC4 installation and, yes, the wireless adapter sort of worked. But I could only browse the Google website. No matter how I set the firewall/SELinux properties, there was no way to browse the rest of the internet. But that is another story.

Frustrated programmers on Linux

Life is hard, especially during working hours – and the harder, the more you have to do your job on Linux for Linux. If you think about it, that's odd. Back in the day, Unix was the result of a young team that sought a programming (and hacking) environment. At the time they had a very programmer-unfriendly environment, and Unix was very successful in this respect – text editors, interpreters, advanced shells, programming tools and the like flourished in Unix filesystems.
Today it is like those days… well, it is still like that, in the sense that the rest of the world, most notably Microsoft, caught up with and overtook the Unix command line.
First, suppose you want a project-aware C/C++ editor. In the Windows world you may not have much choice, but the answer is easy: the latest Visual Studio is quite good, allowing you to do nearly everything in a comfortable way. Linux lags behind; there are vim, emacs and Eclipse. Eclipse is indeed good (considering the price), but its C/C++ editing capabilities are far inferior to basic Visual Studio's. Maybe you can get it working the same way, but this requires a great effort for those who are not fluent in this development environment.
Now suppose you want to interface your application with audio. If you use the basic operating system functionality (likely OSS) you can do it rather quickly, at the price of audio exclusivity: while your application is running, no one else can access the audio device.
This is a known problem and has a known solution – a sound daemon that performs mixing for multiple audio applications. This is reasonable.
What is unreasonable is that Linux sports a wide variety of such daemons; every distribution has its own. What is even more unreasonable is that both RedHat/Fedora and Ubuntu use the eSound daemon, which has no documentation.
So you are denied a standard choice and, what is worse, the choice you are forced into has no documentation whatsoever.
Frustrating, isn’t it?

Selected power

It is always a good surprise when a task you thought rather hard turns out to be quite easy when you actually do it. Using select for I/O multiplexing and the like is one of those pleasant surprises. The man page can be quite intimidating, therefore I'll start with an example. Suppose you are dealing with network communication (or any other form of interaction where an I/O operation could take too long to be acceptable). You are likely to read (or write) a file descriptor (previously opened via socket and then bound in some way) AND to check for a timeout. If the operation is taking too long, you want to bail out of the read operation and perform the needed action. If you are stuck with standard read and timer operations you may need to set up some signaling, check for the right thread to catch it, and so on. But there is a better way.
select accepts several arguments – a limit, three sets of file descriptors and a timeout – and returns as soon as one of the conditions defined by those arguments is met. The file descriptor sets are defined via the fd_set type (handled with the fellow macros FD_SET, FD_ISSET, FD_ZERO and FD_CLR). Each set argument can either be NULL or point to an fd_set. The first one is the set of file descriptors checked for non-blocking read: if one of the file descriptors contained in this set becomes readable without blocking the caller, select returns. The next argument is for writable file descriptors, and the third one is for file descriptors that have to be checked for errors (exceptions).
The first argument is the maximum file descriptor contained in the union of the three sets, plus one. This serves as a limit to avoid checking the whole range of file descriptors.
The last argument is a timeout. It is a struct timeval (the same one filled in by gettimeofday) that can express timeouts with microsecond resolution. In practice the resolution is much coarser than that and depends on the kernel and the architecture; for example, on Linux kernel 2.4 on ARM the resolution is 10 ms. Better to check the smallest handled timeout before blindly relying on it.
select returns -1 in case of error, 0 in case of timeout, or the number of file descriptors that changed status in the three sets.
In this example the return code is easily processed, while more convoluted cases can be trickier.
Let's take another example: suppose you are reading audio packets from a stream and you want to decode and play them back. A first approach to this problem could be two threads with a coupling buffer. One thread reads packets and pushes them into the buffer, and the other pops the packets out of the buffer and sends them to the audio driver. This is conceptually simple, but not straightforward to get right. When dealing with threads you always have to synchronize them, and it is likely you'll need a third thread to control the streamer and the player threads.
If you employ select the solution is very simple and natural: just check the wall clock and compute a timeout for the next play, then wait in select either for a new incoming packet or for the time to play.
In this case there is just one thread, with the guarantee that while you are reading the buffer no one is writing into it. This allows you to simplify the buffer management.
If you are not so lucky as to work with Linux, and your daunting task is to earn a living on Windows, the good news is that a similar function is available for Microsoft platforms.

Two computers are better than one

So you are at the office and you need Linux as badly as Windows. Linux hosts the cross compiler and the toolchain for your project, while Windows keeps you in sync with the rest of the company. Dual boot is, of course, the way NOT to go; from my experience you end up using just one system. Besides that, Fedora Core 5 support for NTFS is, to put it mildly, rough. You can read an NTFS partition, but you cannot write to it, despite all the claims you find on the Linux-NTFS website. If you are incoscien… ehm, brave enough, you can even try the Captive NTFS driver, which is supposed to wrap the Windows NTFS drivers. These drivers are pre-built for Fedora Core 4, and I wasn't able to get things working on FC5. I actually have some work to do rather than playing with the OS.
Another option I haven't tried (yet), but that friends tell me works well, is VMware. This is system virtualization software that runs below one or more operating systems, letting them believe they are alone.
Anyway, having two PCs is always preferable; it is just annoying that you need two keyboards and two mice, and copy'n'pasting between the two becomes quite difficult.
Until this morning.
This morning, while sneaking around Freshmeat, I found x2vnc, a nice hack that, using the VNC system, allows you to share the same keyboard and mouse between two PCs – one running Linux and the other running Windows – as if they were one dual-headed system.
Now when I move the mouse pointer over the left edge of the Linux screen, the pointer “flows” into the Windows screen. Move it over the right edge of the Windows screen… et voilà, it's back on Linux. It even works fine when the Windows system is at the login screen (i.e. no user has logged in yet). It is so sweet that copy'n'paste works, too.
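My setup boils down to a single command on the Linux side, given a VNC server already running on the Windows box (the host name here is an example):

```shell
# -west: the Windows screen sits past the left edge of the Linux display
x2vnc -west windowsbox:0
```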
Great Hack.

Fedora Core 5 – ready for the masses?

It has now been somewhat more than a month that I have been using Linux Fedora Core 5 all day long. I have never used Linux so extensively at the workplace (maybe with the exception of some periods when I worked at ABB, but that was RedHat 4.2). How is the system doing when compared with, let's say, Windows XP? Well, it's doing quite well. I mean that for a naive user everything should work as expected, with minor annoyances. Quite unexpectedly, it is the Windows power user who could have some not-so-pleasant surprises. For example, the default browsing system is a real pain in the neck. Every time you double-click a folder, a new window pops up with that folder's content, leaving the old window to clutter the desk. Moreover there is no obvious way to turn this kind of window (I reckon it is named a “spatial browser”) into a more conventional browser with its sidebar directory tree. Every Linux enthusiast will advocate that you can turn off the spatial browser, sticking back to what I would call the standard file system browsing window. It's true – you just have to twiddle with some registry key. Well, yes: after all the criticism against the Windows registry, Linux has a registry too. It's not something new in FC5; I think it dates back to around RH9. We can argue that it is somewhat friendlier than the Windows one, yet strictly speaking it is a registry.
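For the record, the registry key in question can be flipped from a terminal; assuming the stock GNOME/Nautilus shipped with FC5:

```shell
# Make Nautilus always open the conventional browser window
# instead of the spatial one
gconftool-2 --type bool --set /apps/nautilus/preferences/always_use_browser true
```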
After some time and frustration you can get used to right-clicking a directory and selecting “Browse” in order to get the standard browser. It is not too close to the Windows file browser (it can show files in the directory tree, and the navigation is not always intuitive), but anyway it could do.
Now that you have a browser, maybe you are looking for a specific file. Windows users will find a “Search” (or “Find”, I'm not sure about the English text) item in the context menu of directory and drive objects. When turning to Linux, the same users will have quite a hard time locating the same command. Being a long-time Unix user, I find it faster to open a terminal and cast a 'find' invocation. In the same way, the context menu in the browse area doesn't contain a “New Folder” option; you have to right-click in the directory tree sidebar.
On Windows you can create a shortcut that points to anything you can browse in the file browser. That's quite intuitive; I would dare say they copied the Unix concept of a soft link. Double-clicking the shortcut is the same as double-clicking the object it points to. In Linux you have basically the same tool, but it doesn't work if you link a file on the network. I don't know exactly what the problem is; I think not every application recognizes network file names (which look something like smb://host/path or cifs://host/path). If you want something like that, you need to be root, mount the network server into your filesystem, and then create a local link.
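The root-only workaround can be sketched like this – server, share and path names are made-up examples:

```shell
# As root: mount the remote share somewhere local...
mkdir -p /mnt/fileserver
mount -t cifs //fileserver/photos /mnt/fileserver -o username=carlo
# ...then ordinary symlinks into it work for every application
ln -s /mnt/fileserver/2006 ~/photos-2006
```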
While we are talking about files: the Windows user expects to share his or her files on the network. It is useful – after all, what would be the purpose of the network if not sharing files? Under Linux the normal user is prevented from creating a share wherever he or she wants. If you browse the documentation long enough, you'll discover that you can create a directory named Public in your home directory that can be shared. First root must enable this feature, and then the user has no way to control access to this share. With Windows it is straightforward to set the share permissions so that only some users or computers can read or write.
Let's quit playing and do some real work. Fire up OpenOffice Calc… wait while the application fully ignites… then start working. FC5 is much faster than previous FCs, but it is still slower than Windows when it comes to launching office applications. Now insert some data and formulas and create a pretty diagram. The average spreadsheet user shouldn't complain about lacking functionality. Now that you have such a beautiful diagram, you want to show it to someone, possibly your boss. Easy. Select the diagram, copy it, create a new HTML/RTF mail message in Evolution, paste it… and stare at the empty message. There is no way to do it. You can quite confidently copy'n'paste plain text, and rendered HTML works fine too, but images don't. The only way I found is to export the diagram to a standard file format (EPS or SVG) and then import it into the mail message.
It is time to talk about system stability. The matter is very delicate; there are a number of factors (broken or nearly broken hardware, broken software, or messed-up configurations) that can bring a system to its knees. I remember my first days at UbiSoft, when my Win2000 blue-screened every day; eventually it turned out to be a problem with buggy video drivers. I have also had a quite busy Linux box running without a hitch 24h a day for nearly a year. So this talk is to be taken with a grain of salt. Well, I found FC5 not very stable: I have either to log out or to power-cycle it about once every two working days. It is true that I had to compile my own drivers for a Wi-Fi USB dongle; on the other hand, it is true that the dongle has Windows drivers, and in a perfect world I wouldn't be required to compile my own. The weak component seems to be the user interface, which sometimes loses interest in the world in general and in my input in particular. But I have also had the keyboard stop working (no Caps/Num Lock toggling), the system freeze on USB dongle removal, and so on.
Ok, you get what you pay for – and to be honest, with FC5 you get a lot more than you pay for. Anyway, despite the long way it has come since the Slackware days, Linux still has some road to go to catch up with Windows in everyday usage.
The trick of the day is “How to have lots of screensavers in FC5”. You are a geek (otherwise you wouldn't be using Linux), so screensavers are an indication of your geekness: the more, the geekier. You would be rather disgruntled to find that FC5 comes, by default, with 5 dumb and dull screensavers. Searching the rpm package list, you find the good old xscreensaver collection. You install it, but… you are still stuck with the 5 boring ones. You have to convert the xscreensavers with the following commands:
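The commands themselves were lost from the post. At the time, gnome-screensaver shipped a migration script that converted xscreensaver hacks; something along these lines – the script's location varied between releases, so the paths here are assumptions to be checked with rpm:

```shell
# Install the classic xscreensaver hacks
yum install xscreensaver-extras xscreensaver-gl-extras
# Find the migration script shipped with gnome-screensaver...
rpm -ql gnome-screensaver | grep migrate
# ...and run it over each xscreensaver config file (assumed path)
for f in /usr/share/xscreensaver/config/*.xml; do
    /usr/share/gnome-screensaver/migrate-xscreensaver-config.sh "$f"
done
```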

Since I've been really kind to Microsoft, I think it is fine to rebalance the post; therefore the link of the day is John Dvorak's “Eight Signs MS is Dead in the Water”, a pointer to an article that expresses some concerns about Microsoft's future.

Adventure in cross compiling land

If you have a C background you may find correct and appropriate that filenames are case sensitive. After all ‘a’ is different from ‘A’ as much as 65 is different from 97. If you have a BASIC background you may equally find correct and appropriate that filenames are case insensitive. It doesn’t matter how you write it, it is always the same character.
Each approach has its strengths and good arguments. The problem arises when one of the parties blindly ignores the rest of the world.
Well, this may sound like gibberish until you try to compile the Linux kernel on Windows. In this impressive set of sources there are a bunch of files in the same directory whose names are identical case-insensitive-wise. Moreover, if you untar the sources you get no warning sign, since the tar command silently overwrites existing files. You get some hint of the problem only when you copy the sources with the Windows GUI from a Samba share to your local disk.
The problem is not so hard indeed, but if you don't want to tamper with the source file structure it is better to rely on a case-sensitive filesystem. That means you cannot use Windows.
So I switched to Linux. After all, Freescale provides a ready-made cross compiler for Linux, so that is a clear sign of the way to go.
At work I installed the recently released Fedora Core 5 (yes, I had just completed a satisfactory configuration of FC4 on my laptop 🙁). I must admit that Fedora Core, at last, has an original look. And a good look. Apparently they stopped following Redmond and Cupertino and started to set their own trend. Well done, I like it.
On the other hand, getting FC5 to work seamlessly in a Windows environment is not something for the faint-hearted. Be prepared for a good amount of swearing before you can resolve names, browse Windows shares and access the files they contain. Once done, it works very well.
Back to Linux kernel compilation. Despite what you may think from a look at my desk, I like cleanliness and order, at least in files. So I want to store in the versioning system nothing that can be derived – just primary sources. With the kernel source this is a little fuzzy. In fact you download sources that are 'distclean'ed, i.e. no kernel configuration is set. First you configure the kernel, either by issuing one of 'make menuconfig', 'make oldconfig' or 'make xconfig', or by picking one of the precanned configurations by typing something like 'make xyz_config'.
Any one of these performs some operations – a '.config' file is placed in the kernel source root, and some symlinks and files are created. So far so good. A file named 'autoconf.h' is required in order to build the kernel. This file is just the C version of the '.config' file: in other words, and simplifying a bit, where '.config' has something like 'VALUE=y', 'autoconf.h' has '#define VALUE 1'.
Now I would expect 'autoconf.h' to be created from '.config' somewhere in the Makefile. This is not so. The only way I have found so far to create 'autoconf.h' is to interact with the system using one of the 'make *config' targets. This is bad for me, since it prevents a fully automatic build.
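For what it's worth, the interaction can be scripted away, which restores an automatic build – a sketch, assuming a '.config' is already sitting in the kernel source root:

```shell
# Feed an empty answer (i.e. the default) to every question the
# configurator asks; the conf tool then regenerates autoconf.h
# from .config without a human at the keyboard
yes "" | make oldconfig
```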
On the other hand, it is true that the configuration changes quite seldom, so you don't 'make distclean' a lot, and the '.config' file you generate will be with you for long.
Maybe I didn't search enough, but I have some actual work to do and results to produce, besides googling for answers and trying to bend the universe to my personal view.