The holy grail for people with BOINC machines is a Linux distro that comes with BOINC pre-installed AND automatically installs the NVIDIA drivers so GPU projects can merrily crunch away. It also needs a tiny Web browser and a usable text editor, but that is it. Every minimal BOINC distro project seems to die an agonizing death before shipping something usable. The problem is that much of Linux is written by 12-to-14-year-old boys who are all about hacking out something fast and have never been to school to learn the science of software development or the concept of software architecture.
Linux isn’t modular. In a nutshell, that is the problem. Don’t confuse this with “doesn’t have modules,” because it does. Linux isn’t modular because those modules tend to have hundreds, if not thousands, of dependencies when it comes to building and running. If you don’t believe me, sit down with someone and attempt to create a minimal BitBake-built Linux image for an embedded target that must have incredibly tight security because it is going into a medical device. Even starting from scratch with an amazingly fast build machine and a good source control system, it will take you roughly eight months.
How did we get here? A minimalist vision. In the early days, hardware was expensive. You could easily spend $5,000 on a desktop computer without even getting a good one. Years later, clones came out in the sub-$1,000 range with 20 MB hard drives and CGA monitors. This led to an awful lot of “Linux on a floppy” work trying to run in an eyedropper of memory. Don’t confuse this with a geek exercise. The DOS PC memory barrier of 640K was something everyone was trying to deal with. That’s right: K, not MB. You don’t think twice today about sending email attachments larger than 2MB, but back in the day you had to jump through massive hoops to load an image requiring 2MB of RAM.
Initially, under DOS, we compiled and linked nearly everything into the binary executable file. DOS provided a rudimentary set of functions and interrupt calls, but your application had to have everything else bound to it. We didn’t ship redistributable libraries or have a vast set of system services to pull from.
Later, overlay linkers came along: RT-Link, Blinker, and others. Via various methods they created binaries that could swap parts of themselves into and out of the 640K world. There were EMS and XMS memory management schemes as well as swaps to disk.
Both the Unix/Linux world and the task-switching GUI layered on top of DOS called Windows began using various dynamic link library mechanisms and services. The original IBM PC architecture flaw, which created a 384KB RAM hole for add-on devices above the 640K memory barrier, had become unsolvable any other way. Hardware had to catch up, and the Intel segment:offset memory addressing scheme had to be abandoned, or at least continued support for the 16-bit version of it.
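For readers who never lived through it, the segment:offset scheme computed a 20-bit physical address from two 16-bit registers. A minimal sketch (function name is mine, not from any real tool) shows the arithmetic, why the addressable space topped out at 1 MB, and where the 384 KB hardware hole sat:

```python
def real_mode_address(segment: int, offset: int) -> int:
    """16-bit real mode: physical = segment * 16 + offset, wrapping at 1 MB."""
    return ((segment << 4) + offset) & 0xFFFFF

# Top of conventional memory: 640 KB begins at segment 0xA000.
# Everything from 0xA0000 up to 0xFFFFF (384 KB) was reserved for
# video RAM, ROMs, and add-on hardware -- the "hole" described above.
assert real_mode_address(0xA000, 0x0000) == 640 * 1024

# Many different segment:offset pairs alias the same physical address:
assert real_mode_address(0x1234, 0x0010) == real_mode_address(0x1235, 0x0000)
```

The aliasing in the last line is why real-mode pointers were such a rich source of bugs, and why the scheme could not simply be extended once hardware moved past 1 MB.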
It is kind of sad the x86 design was chosen by IBM. The Motorola 68000 architecture chosen by Apple had linear memory addressing. Like IBM, though, Apple created an architecture design flaw by reserving memory at the top of the address space for hardware instead of reserving the very beginning, or bottom of memory as it is normally called. (Please, no Big-Endian/Little-Endian arguments about what is really top and bottom.) You could only go so far, then you had a hole.
Dancing around this hole caused many bastardizations to happen. Under the task-switching GUI for DOS called Windows, the big bastardization was requiring many, if not most, DLLs to be placed in a SYSTEM DLL directory which had to be on drive C. If you had a small boot drive and a larger, slower data drive, attempting to install on drive D would still cause drive C to run out of space because all of the DLLs _had_ to go on drive C. I ran into so many people who trashed their machines this way.
The problem has only gotten exponentially worse with Linux and kids doing development. We are getting very near the point where Linux itself will either implode or the embedded world will simply fork off to a few semi-commercially maintained embedded versions. The rat’s nest of things which must come along for one single module to be used has become unmanageable.
I don’t remember the package, but I do remember listening to the rant from a friend trying to shrink a baked version of Linux. Some module, supposedly written in C and needed by the application for the device, used a function from some other link library which invoked a Java method. This one tiny module had to pull along the Mack truck of a Java VM, with all of its supporting libraries, just for one function call.
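My friend’s story is the transitive-closure problem in miniature. A toy sketch (the module names are invented for illustration, not taken from any real package) shows how one innocent-looking edge in a dependency graph drags an entire runtime into the image:

```python
# Toy dependency graph; all module names here are hypothetical.
DEPS = {
    "device-app": ["sensor-lib"],
    "sensor-lib": ["util-lib"],
    "util-lib":   ["jni-bridge"],   # one function call crosses into Java
    "jni-bridge": ["java-vm"],
    "java-vm":    ["java-base", "java-net", "java-xml"],
    "java-base":  [], "java-net": [], "java-xml": [],
}

def closure(root: str) -> set[str]:
    """Everything that must ship in the image for `root` to run."""
    needed, stack = set(), [root]
    while stack:
        mod = stack.pop()
        if mod not in needed:
            needed.add(mod)
            stack.extend(DEPS.get(mod, []))
    return needed

# One call in util-lib pulls the whole Java stack into the image.
print(sorted(closure("device-app")))
```

A package manager or image builder computes exactly this closure, which is why trimming one module from a baked image so often fails: the edge you need to cut is buried three or four levels down.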
Now we have the Raspberry Pi taking the embedded and semi-embedded world by storm. Hopefully the Raspbian desktop will pick up the torch. It appears the community has begun to realize the problem, because RISC OS is now available for the Raspberry Pi.
If you haven’t guessed, trying to unwind the rat’s nest of dependencies to create a “minimal BOINC” distro is what causes those projects to fail. Had things actually been modular, the “minimal BOINC” distros would have spun up over a long weekend.