When I was first learning how to use Linux, package managers were a godsend. I still remember the first time I saw someone use apt to download some needed software. Here was an OS that was free as in beer, free as in speech, and backed by multiple mirrors of hundreds of software packages that would instantly configure themselves on your system. For a moment I felt like I was living in the future. In reality, I was living in a world where the dominance of Windows had blinded me to the fact that the cutting edge of computing existed elsewhere, and it was awesome. On a more practical note, the ease of use of Debian packages made it much easier to set up a working Linux environment. Dare I say that if it weren't for package managers, I would never have considered Linux to be anything more than a curiosity.
Nowadays, I don't use packages that much. As the years went by, I found myself being forced to build various pieces of software from source. At first I simply pasted commands into a shell, but eventually I grew to understand what those commands meant, until I finally got to the point where I could grab a tarball and install it without any guidance.
Then something else happened. Building from source went from being an option to being my preference, and that preference grows stronger every week.
My current view is that package managers are indispensable in at least two situations. The first is if you're a newbie, or if you use Linux for basic computing tasks. The second is if you're running a server in an enterprise environment, where your system needs to stay up to date and simply cannot break. If you fall into either of these camps, I'd say you're crazy not to rely on packages as much as possible.
For developers and power users, however, I think the negatives of package managers become more severe. Namely, they take control and transparency away from you, leading to problems that can waste as much of your time as a finicky source bundle. For instance, what if you want a certain feature enabled, or disabled? With a precompiled package, you have no say. By the same token, what if your distro's repository is slow to pick up the latest version of a programming language? These concerns don't affect everyone, but when they do, it can be incredibly frustrating.
As for transparency, consider Synaptic Package Manager, which doesn't tell you what gets installed, or where it gets installed to, until the package is actually on your system. This is counterintuitive to me. The package is installed without ever asking you for a destination, which means the manager must already know where everything is supposed to go. Why can't I see that ahead of time?
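To be fair to Debian's tooling, the command line is more forthcoming than Synaptic. Assuming a Debian-based system (and using wget purely as an example package), you can see exactly what a package contains, and where it would land, before it ever touches your system:

```shell
$ apt-get download wget          # fetch the .deb without installing it
$ dpkg-deb -c wget_*.deb         # list every file and its destination
$ apt-get install -s wget        # -s: simulate the install, touching nothing
$ dpkg -L wget                   # after installing: list the files a package owns
```

So the information exists; my gripe is that the graphical frontend doesn't surface it up front.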
Furthermore, I can't shake the feeling that the dependency lists for some packages could be leaner than they are. Far too many times have I installed what looked to be a simple package, only to find that it has to bring twenty friends along, some of which look completely unrelated. Back when I (admittedly foolishly) tried to run Ubuntu on five gigs of disk space, I'd fill it up in a blink, until I went in and uninstalled a half dozen kernel images and a list of libraries as long as my arm. This problem plays into a more general attitude among developers that assumes disk space is trivial (for more examples, see Maven, Ivy, rubygems, and other programming-language-related package managers). Even when it is, I don't want to fill my HDD with more data than necessary, nor do I want to spend time pruning it months from now.
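At least the pruning no longer has to be done by hand. Again assuming a Debian-based system (package names are just examples), a few commands will show what a package is about to drag in, and reclaim the space afterwards:

```shell
$ apt-cache depends wget          # direct dependencies of a package
$ apt-get install -s wget         # simulate: the full list of "friends" it brings
$ dpkg -l 'linux-image-*'         # the old kernel images quietly piling up
$ sudo apt-get autoremove --purge # drop dependencies nothing needs anymore
```

It doesn't make the dependency lists leaner, but it at least makes the bloat visible.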
For all these reasons, I've grown to rely on building from source almost exclusively. It lets me know exactly what's on my PC, and where it lives. For example, I find it easier to keep track of binaries if I install them all to /opt, rather than spreading them out between /opt, /usr, and /usr/local. It also lets me install multiple versions of a piece of software and switch between them as needed. To be fair, there are pitfalls to this approach: some things don't build cleanly, especially on OS X. Thankfully, most of my pitfalls have doubled as learning experiences, and I hope the lessons learned will make future installations smoother.
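The version-switching trick is simpler than it sounds: keep each version under its own prefix and flip one symlink. The sketch below is self-contained (the "ruby" scripts and the temp directory are stand-ins; a real setup would live in /opt, with /opt/ruby/bin on your PATH):

```shell
# Two side-by-side installs under one root, switched with a symlink.
set -e
opt=$(mktemp -d)                            # pretend this is /opt
for v in 1.9.3 2.0.0; do
    mkdir -p "$opt/ruby-$v/bin"
    printf '#!/bin/sh\necho %s\n' "$v" > "$opt/ruby-$v/bin/ruby"
    chmod +x "$opt/ruby-$v/bin/ruby"
done
ln -sfn "$opt/ruby-1.9.3" "$opt/ruby"       # "current" points at 1.9.3
"$opt/ruby/bin/ruby"                        # prints 1.9.3
ln -sfn "$opt/ruby-2.0.0" "$opt/ruby"       # flip one link to switch
"$opt/ruby/bin/ruby"                        # prints 2.0.0
```

Since nothing outside the two prefixes is ever touched, uninstalling a version is just `rm -rf` on its directory.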
But there's one thing about building from source that bothers me more than anything else: I'm starting to get fanatical about it. When I tried to get Ruby set up last month, I found myself increasingly frustrated with the community's insistence on using MacPorts or Homebrew to grab the needed pieces. I felt allergic to the voodoo that the various Ruby management systems (such as RVM) practiced behind the scenes. I even chafed at Google's insistence on shipping the precompiled version of Go as a .pkg installer, which placed everything in /usr/local whether you liked it or not. The way I saw it, the people with an interest in installing Ruby don't fall into either of the two camps that benefit from package managers. Abstraction is good in programming, but abstraction at this level can leave someone working with a tool they don't really understand. If it ever breaks, they'll find themselves at a loss as to what to do. As I mentioned before in regards to Ruby, convention over configuration can only go so far; sooner or later, you need to look under the hood. And once you get over the initial hurdles, and your system is suddenly working exactly the way you want, the feeling is nothing short of triumphant.
Again, I think my own stance is too strong. It paints the world as too black and white, and doesn't account for any number of edge cases. Ultimately, it doesn't really matter whether you use packages or roll your own, as long as you can do the stuff you want to do. It's not worth getting worked up over these kinds of debates. Not when there are stupid projects to undertake, like building Linux From Scratch.