Linux Desktop Musings

The New Linux Packaging Paradigms and My Personal Fears

These are exciting times to be a Linux user. With systemd, the new desktop environments, and now container-based packaging, we will witness a new paradigm in how the entire ecosystem works, for better or for worse. I have to admit the idea is quite exciting: finally, we’ll be able to iron out the shortcomings of Linux packaging. No longer will installing software “foo” be a burden because it happens to bump half of your system’s libraries to newer versions and ends up breaking your system. We’ll be able to run a stable base while isolated newer packages pull in new versions of dependencies as needed. What worries me in this entire process is that it might spell the end for distributions as we know them today.

One might ask why I think distros are important and what would happen if they just disappeared overnight. To me, distributions have always meant organisations that monitor how software functions beyond upstream, within specific sets of requirements. A piece of software might run just fine for its developer, and it might do so for the handful of volunteers who test upstream releases. But what ends up being used in distributions gets at least another set of eyes sifting through the bugs that occur when software “foo” is used within environment “bar”, and so on.

Let me cut the cryptic theoretical musing and give you a real-life example: the KDE 4 series of releases. When KDE 4 came out, the developers deliberately shipped a half-baked version as 4.0 in order to encourage more testing by other community members. Some distros, like openSUSE, adopted KDE 4 early on – Kubuntu, AFAIK, did something similar soon thereafter. Upstream developers were hoping for more feedback from real-life distribution deployments of their software. They got it, yet I still don’t think this was a better move than simply releasing the software as a development preview and letting the distributions fix things within their own development branches. Why? If there is a new upstream release, there are only so many users willing to take the plunge and be part of the experiment. Mostly these are tech-savvy computer enthusiasts, often with a spare machine or VM to test such things. The thing is: calling something 4.0 won’t make the proverbial grandmas, who use a browser and maybe a spreadsheet tool, test a new desktop environment. While the label might have encouraged a few more people to try 4.0, a PPA could have done the same.

This reminds us that upstream Linux software usually comes with a very tolerant definition of the word “stable”. And while that may be sufficient for bleeding-edge distros like Arch or Fedora, the average Mint or Ubuntu user just won’t bother with a distro that breaks even a little. As a long-time Debian user, I’ve become the sort of person who is annoyed by the tiniest bit of breakage. For me, Ubuntu LTS or Debian Stable would be the perfect distro, were it not for the sometimes outdated drivers.

To stay with the KDE example, I ran into many problems during the early KDE 4 releases: the email cache breaking between releases, a completely unusable PIM stack, an unusable Calligra office suite, and so on. However, when I tested Debian Stable with 4.3 (Squeeze, I think), things just worked. Granted, upstream KDE was already at 4.6 when Squeeze was released, but at least I had a stable release within what I define as stable. This is why I want distros to stay – or at least those fine enthusiasts who take a snapshot of upstream and turn it into a stable OS for us. I’m completely aware that the ability to quickly use the latest software, and even several versions of it side by side, will bring at least a bit more instability, and that is probably the price we pay. But I’d hate for Linux to become just a huge collection of chaotic software packages with little quality control.