Linux Distros That Suck at Multiple Hard Drives

Some Linux distros really suck at dealing with multiple hard drives. Too many “maintainers” only have a laptop.


You need a wee bit of background before we jump in. Hopefully you can see the featured image. I recently picked up this Lenovo M93p ThinkCentre from eBay. I specifically bought an M93p instead of an M83 because I wanted two hard drives. I had a 480 GB SSD I wanted to transfer the Windows 10 install over to and I had a 6TB Western Digital Black I wanted to use for the other operating systems.

Why did I buy this particular M93p?

Lenovo M93p Ports

I actually added the PS/2 ports today. The little cable showed up to do that. It already had both serial ports, wifi, and the NVIDIA add-on video card. If your eyes are real good you will notice that on the other side of that Wifi antenna is a parallel port.

Software engineers need a lot of ports. If book sales start picking up I may even have to break down and buy another dot matrix printer to print shipping labels with. Yes, parallel port dot matrix printers are still made. You can still buy them new today. There are lots of legal requirements to print with impact printers on multi-part forms in various shipping and transport industries. They also do a more economical and reliable job on mailing labels . . . if you buy the right one . . . and you have the proper printer stand.

Printer stand back

The best ones from days of old have both a center feed slot and a rear feed slot to accommodate either type of printer. Long time readers of this blog will remember I started work on a Qt and USB series and then life got in the way. That was all USB serial ports talking to real serial ports. My Raspberry Qt series also involved quite a bit of serial port work. My How Far We’ve Come series also involved quite a bit of serial port stuff as well.

Putting it mildly, I still do a fair bit of serial port work from time to time. If I get done with RedDiamond and RedBug without life getting in the way I’m going to start a new post series using CopperSpice and serial ports. The makers of Qt have honked off their installed base with the new “subscription licensing” for Qt 6.x and beyond. Even more honkable, if that is possible, is the chatter that they are trying to license the OpenSource QtCreator as well. Yeah, people are making a hasty exit from the Qt world and many are headed to CopperSpice.

Sadly Needed Windows

Unlike every other machine in this office, I needed to have Windows on this machine. There is some stuff coming up that will require it. There is no way in Hell I was going to try writing my serial port code using Linux in a VM. I may edit it there, but testing is a completely different story.

Perhaps you’ve never spent days trying to track down why some characters don’t get through, or worse yet, why the serial port just “stops working.” After a bunch of digging you find that someone baked super secret control strings into the VM’s interface driver. Nothing nefarious. Usually it is there to support “remoting in” via cable connection.

Boot Managers

In the days of DOS and GUI DOS that Microsoft insisted on calling Windows, this was no big deal. BootMagic and about a dozen other competitors existed to help Noobies and seasoned pros alike install multiple operating systems onto the same computer. Honestly, I can’t even remember all of the different products that had a brief life helping with this very task.

OS/2 had Boot Manager baked in. Those of us needing to develop for multiple operating systems usually ran OS/2 as our primary. It just made life so much easier.

Early floppy based Linux distributions came with Lilo. It was generally pretty good at realizing Linux wasn’t going to be on the primary disk. SCSI controllers could support six drives and distributions were different enough you had to boot and build on each.


Later many distros went with Grub. To this day Grub has issues. The biggest issue is that each Linux distro adopts new versions of Grub at their own pace and Grub has a bit of history when it comes to releasing incompatible versions.

Adding insult to injury is the fact many Linux distros like to hide the files Grub needs in different places. When you run your distro’s version of “update-grub” (as it is called in Ubuntu) it has to be a real good guesser when it wants to add a Grub menu line for a different distro.

Your second fatal injury happens during updates. Say you have an RPM based distro but have Ubuntu as the primary Grub OS. When your RPM based distro updates and changes the boot options for its own Grub menu entry in its own little world, it has no way of informing the Grub that is actually going to attempt the boot. Sometimes an “update-grub” will fix it and sometimes it won’t. A bit heavier on won’t than will.
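On the Ubuntu side, that guessing step looks roughly like this (a sketch; os-prober is the helper update-grub relies on for finding other distros, and some newer releases ship with it disabled by default):

```shell
# Ask os-prober what other operating systems it can find on the attached disks.
# These guesses are what update-grub turns into extra menu entries.
sudo os-prober

# Regenerate /boot/grub/grub.cfg from /etc/default/grub plus those guesses.
sudo update-grub
```

If the RPM distro changed its own kernel boot options, regenerating only helps when os-prober guesses the new options correctly, which is exactly the weak spot described above.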

Drives got too big

That’s the real problem. During the SCSI days when 80MEG was a whopper we put each OS on its own disk and just changed the boot device. That was our “boot manager.” Every OS existed in its own little universe.

As drives got bigger, various “boot manager” type products could play games with MBR based partitions. Only one partition could be “active,” so a tiny little boot manager got stuffed into the MBR and it changed the active partition to match the requested OS.

Cheap but effective trick as long as you didn’t need more than four partitions. Only a primary partition could be flagged for active booting. Lilo and the other Linux boot managers started allowing Linux distros to boot from Extended partitions.

Today we have GPT and UEFI

I’m not intimate with how these work. The Unified Extensible Firmware Interface (UEFI) created the spec for GUID Partition Table (GPT). {A GUID is a Globally Unique Identifier for those who don’t know. That’s really more than you need to know.}

Theoretically we can have an unlimited number of partitions but Microsoft and Windows have capped support at 128. UEFI should be replacing Grub, Lilo, and all of these other “boot manager” type techniques.

We shouldn’t have all of these problems

As you install each OS it should obtain its partition GUID then find the boot device and locate the UEFI partition on it. Then it should look for a matching GUID to update and if not found, create an entry. There is a spec so every entry should be following the same rules.
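On Linux you can inspect what the firmware actually recorded with efibootmgr (a sketch; it requires having booted in UEFI mode so the EFI variables are visible):

```shell
# List the firmware's boot entries and the current boot order.
# Each BootXXXX line is one registered OS loader.
efibootmgr

# Verbose mode also shows the partition GUID and loader path per entry,
# which tells you whether an installer actually registered itself.
efibootmgr -v
```

When an install "succeeds" but never shows up at boot, this is the quickest way to see whether it ever created its entry.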

(If you read up on the OS/2 boot manager you will see that from the 10,000 foot level UEFI and the OS/2 boot manager conceptually have a lot in common.)

When any computer boots from UEFI and there are multiple operating systems in the UEFI partition, UEFI should show the menu and let the user select. This should all be in hardware and firmware now. We shouldn’t have Microsoft trying to lock us into their buggy insecure OS and Linux distros shouldn’t be trying to ham-fist Grub into UEFI.

The Split

I wanted all Linux distros to boot from the 6TB drive. I wanted Windows and UEFI to stay on the tiny SSD. This isn’t unreasonable. As all of the background should tell you, I’ve been doing things like this for decades. I did not want to try and stuff everything on the 6TB.

Each Linux distro would get 500 GB – 800 GB depending on how much I thought I would be doing in them. This means I should be able to put up to 12 different distros on the drive.

That may sound like a lot, but it’s not. You’ve never written code that worked perfectly on an Ubuntu LTS and failed rather badly on some of the YABUs supposedly using that LTS as their base . . . I have. The only way to know things for certain is to have a bunch of test systems. When you are testing serial port (or other device) stuff you need to be running on hardware, not in a VM.

Manjaro was the first failure

Manjaro kernel 5.9.16-1 was actually a double failure. I have this distro running on a pair of machines, but it is the only OS on them. I rather like what they’ve done with the KDE desktop. I rather hate the fact that PostgreSQL cannot access the /tmp directory, so bulk import to restore a database doesn’t work on that platform. There are a few other odd Manjaro bugs as well.

I wanted to do some pacman packaging and some testing of the future serial port code in CopperSpice on Manjaro so it was first on the list. It booted fast and seemed to install clean. Rebooted the computer and boom, Windows came up. Navigated to the Advanced Settings under Settings in Control panel and tried to switch the boot OS. Boom! Windows is the only entry.


Let’s Install Ubuntu!

I had real dread when I reached for Ubuntu. That installer has had a lot of assumptions baked into it over the years. I was pleasantly surprised and slightly disturbed.

Installation went smoothly and when I rebooted I was greeted with a Grub menu. Both Windows and Manjaro were on the Grub menu, but should we really be seeing Grub on a UEFI system with multiple operating systems? Shouldn’t there be a UEFI menu that just has an entry for Ubuntu, and when you select Ubuntu, shouldn’t that be when you see a Ubuntu Grub menu?

Let’s See if Manjaro Boots Now!

Once I verified Ubuntu could boot and apply updates I rebooted and selected Manjaro. That’s as far as you get. The Lenovo logo stays on the screen and nothing else happens. HP owners have the same problem according to Reddit.

Fedora 33 Was Next

The Fedora installer was the worst of the lot. If you chose the second drive via one of the manual methods, it looked for a UEFI partition on that drive. It wasn’t smart enough to determine what the boot device was and go look there. You couldn’t get out of the screen either. There was no back or cancel, you had to power down.


Manjaro at least tried to install. It failed to create anything in the UEFI partition of the boot disk and it failed to show any error with respect to UEFI creation failure. It refuses to boot from the entry Ubuntu created for it in Grub. Double failure. I suspect this is due to a combination of super secret stuff needed on the menu entry, Manjaro using a different version of Grub, and Manjaro potentially hiding the files in a place Ubuntu doesn’t know to look.

Fedora failed to get out of the starting blocks. That graphical installer needs a whole lot of work!

Ubuntu worked despite my expectations of abject failure.

Just because Ubuntu worked doesn’t mean every YABU will. Most tend to write their own installers. If the developer working on the installer only has a laptop, they are going to take unreasonable shortcuts.

Related posts:

Fedora 33 Black Screen Again

How to Install PostgreSQL on Fedora 33

Fedora 32 – Black Screen After Login

Hackers Aren’t Your Biggest Danger – It’s the Low Wages You Pay Your Cleaning Crew and the Power Plug

Everybody thinks the power plug story is an Urban Legend or IT myth. When I answered this question about the strangest “computer bug” I had ever encountered, one of the stories I told was the power plug story. If you are old and have been in IT a long time, you have encountered it.

I talk quite a bit about this in my latest book covering the history of IT and why things are the way they are. I’m sure many fingers will point at the low wage worker but the real problem is Disposable Management.

Power Plug Background

For the non-technical readers and non-Quora members, please allow me to re-use part of my Quora post to explain this.

computer room with a VAX midrange computer

In the early days of my career DEC midrange computer rooms looked much like this. There was a raised floor for air conditioning and cabling, some stuff hung on the walls. Gray or brown cabinets along one wall, black book shelves, and some tape racks. The rest of the room was the computers and tape drives.

All of this equipment used power plugs like this one.

Twist lock plug

I can’t even find an image of the deep gray metal outlet conduit. There used to be one track on the wall. Some of the larger rooms would have two tracks on the wall and one on the floor immediately behind the computers. Yeah, behind the equipment was just a mess. Cable everywhere. Sometimes they did it right and put the power under the floor, at least I’m told they did. I never worked at such a place.

The conduit was something like 4-6 inches deep because it had to hold the receptacle for these big plugs. They were anywhere from 20-60 AMPS depending on the equipment. Lots of heavy wire in those things.

Standard outlet

You would only have one or two standard outlets in that mess. One would be near the printers so operators could plug in the anti-static vacuum to clean paper dust out of the printers. Another would be behind the computers if your service contract mandated you provide an outlet for the technicians to plug in a work light. Otherwise, there was no need for them.

Then it happened


The number of companies willing to put in a special computer room with its own air conditioning and UPS was finite. It wasn’t just the expense of the room, but all of the terminals had to be permanently wired through the building. You couldn’t “redecorate” by moving desks around because the terminal wire had to be where it was.

Companies wanted to be able to move their computers around. They wanted “departmental” computer systems capable of running 10-20 users in a normal office environment. When one department was finished with something they wanted to be able to move the computer to another. Of course a few “moved out the door” over the weekend, then they suddenly wanted to chain it down, but that is another story.

I’m not making this up. You will note that the MicroVAX II has wheels. The MicroVAX 3600 I had also had wheels. Both of these also plugged into a standard wall outlet. Hopefully you paid attention to the first part of this post.

Scenes like this became common in computer rooms

This became a common sight in computer rooms when the big boxes got replaced with the little wheeled boxes. You will remember I told you most computer rooms only had two standard outlets. One had to schedule both computer down time (usually a holiday) and an electrician’s time (not usually available on holidays) to replace the big round 20-60 AMP receptacles with standard 15 AMP outlets, so a power strip with a lot of stuff plugged into it became a common sight.

Power plug practice became a problem

The cleaning crew (in many cases non-native language speakers) had always been allowed to unplug whatever was in the nearest outlet to run their equipment. Worst case, it was the vacuum someone forgot to put away or a lamp. No biggie.

Once the nearest outlet held the power strip with your four primary computers plugged into it, the problem was quickly realized. If your computers were drawing 12 AMPs and the other open outlet on the wall was on the same circuit, 6 AMPs of cleaning equipment was all it took to trip the breaker and take out everything.
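The arithmetic is simple enough to sketch (assuming a common 15 AMP branch circuit; the numbers are the illustrative ones from above):

```shell
# Rough load check on one shared branch circuit.
computers=12      # AMPs drawn by the power strip full of computers
cleaning_gear=6   # AMPs drawn by the cleaning crew's equipment
breaker=15        # AMPs before the breaker trips

total=$((computers + cleaning_gear))
if [ "$total" -gt "$breaker" ]; then
  echo "trip: ${total}A exceeds the ${breaker}A breaker"
fi
```

which prints `trip: 18A exceeds the 15A breaker`. The computers alone fit under the limit; the cleaning equipment is what pushes the circuit over.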

This lesson was learned in computer room after computer room around the world. The story seems like an Urban legend because it happened so many times. It was almost as common as someone backing into a car when backing out of a parking space.

Computer rooms weren’t the only ones to repeat this story. Every lab environment around the world has had to learn the lesson.

The only people allowed to enter any lab must have already completed the degree training required to work there. You don’t send in the lowest wage worker on your payroll.

Why am I beating this horse?

1,900 doses of Moderna vaccine destroyed after cleaner unplugs freezer in Boston.

Toto said, the freezer at the Boston pharmacy “was in a secure location and had an alarm system installed. The plug was found loose after a contractor accidentally removed it while cleaning.”

When someone took down a corporate system it was annoying. Could even be funny if you weren’t the someone who took it down. Outright hilarious when cookie cutter MBAs busted a gasket about it because they are the “cut costs” chanting chickens who decided someone without an IT degree should be allowed in the computer room because they were “priced right.”

Today we see what MBAs and “priced right” gets you. Potentially 1,900 more lives lost. We don’t have a good way of tracking who would have gotten those doses, which of them contracted the disease, and who else they will infect before they themselves die.

Someone with a degree in that field would never unplug that freezer. Someone who is “priced right” will think “Oh, I’ll plug it back in when I’m done. It won’t hurt anything.” Then they will pull the power plug and continue on with their work. Usually they even remember to plug things back in.

How do I know that?

Because that’s exactly what happened with the MicroVAX computers. They would crash on the nights the cleaning crew came in (usually weekly) and when off-hours support showed up everything looked fine. The machine was still plugged in, had power, no reason for the crash could be found. It took months to track these things down.

Fedora 33 Black Screen Again

Fedora and Nvidia. We can hope Fedora actually tests with Nvidia at some point in the future. For RPM based distros I just don’t hold out hope.

Few things piss me off more than being notified I need to apply updates only to find a busted system on reboot. Fedora is notorious for this. RPM based distros in general have this “never test it” problem, especially when it comes to NVIDIA. They always try to point the finger at NVIDIA and it is always the distro’s fault.

I have multiple machines running Manjaro, an Arch based distro, that have no problems. Arch is far more bleeding edge than the RPM distros. The difference is these distros actually bother to test, at least from a compile and install standpoint.

This is almost as bad as the Fedora 32 problem. For the purposes of this article we will assume you had the Fedora 32 problem and now have your NVidia 450 driver in your Downloads directory.

First thing you have to do is hit <Alt><Ctrl><F2>. This will change you to a terminal login screen where you can actually log in. Yes child, the mouse is now an ornament.

You hope that even though you installed your NVIDIA driver via DKMS, which was supposed to rebuild it on every upgrade, somehow that step just got missed. You cd to Downloads and run the installer with ./ (using whatever filename you actually have, of course). You answer a few questions and hope for the best.
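In command form the recovery attempt looks something like this (a sketch; the .run filename below is a placeholder for whatever driver file you actually downloaded):

```shell
cd ~/Downloads

# The .run installer must be executable; the version in the name is a stand-in.
chmod +x NVIDIA-Linux-x86_64-450.66.run

# Run it from the text console; the installer cannot run while X is using the GPU.
sudo ./NVIDIA-Linux-x86_64-450.66.run
```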

NVIDIA build failure

This is something the much praised CI (continuous integration) development model should have caught. Pure and simple, this won’t compile.

Finding the Fedora and NVIDIA Solution

Keep in mind this is only a temporary solution. The Fedora team will break this again.

I will save you a lot of trouble. You can find the 460 driver here. Click on the “Supported Products” tab and make sure your card is on it, then download it.

Now, that statement assumes you will be downloading from another machine. You could be “old school” and install one of these terminal browsers. Really disappointed with that list. I have an upcoming book on Emacs for my geek book series and it covers the Emacs Web browser.

So, we will assume you are either fortunate enough to have a friend or smart enough to have another machine handy. You download the new driver, copy it to a thumb drive, then what? The GUI always handled that mounting thing for you.

Kingston Data Traveler

I’ve used that old silver stick enough to know that it is a Kingston Data Traveler. The GUI is not going to auto-mount though, so we have to do a bit of digging.

fdisk -l output
sudo fdisk -l

That is the letter lowercase l and not the digit one. Don’t get lured into a mistake with this output. The physical device is /dev/sdc. The partition we are going to mount is /dev/sdc1. Your device and partition may well be different.
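If the fdisk output makes you nervous, lsblk (part of util-linux, shipped by every mainstream distro) gives a friendlier view:

```shell
# Tree view of block devices with size, filesystem, and mount point.
# The freshly inserted stick is the partition with no mount point yet.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
```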

Next we have to make a place to mount this device. Mount it. Then copy the file to our Downloads directory so we can run it and have it on the target machine.

sudo mkdir /media/usb_1
sudo mount /dev/sdc1 /media/usb_1
cd /media/usb_1
cp NVIDIA*.run ~/Downloads/

Running it is much like running the previous 450 driver documented in this post. Once it builds and installs successfully you have one final command.

sudo reboot

Fedora 33 will now work until the next untested update.

Related posts:

How to Install PostgreSQL on Fedora 33

Fedora 32 – Black Screen After Login

Linux Distros That Suck at Multiple Hard Drives

How to Find Your Range Extender’s IP Address

“How to find your range extender’s IP address?” is a sad question to have to ask. It’s a result of a network change. Not an automated, flawless change, but a series of forced manual changes and you not realizing until it is too late that you cannot access the admin page.

The backstory

Some of you are aware that I live rural when I’m not traveling for IT work. Went from paying $300/month for dial-up Internet (all of the access numbers were long distance) to various USB dongles from Verizon, Sprint, and AT&T plugged into a Cradlepoint router. Even had DirectPC and HughesNet.

I’ve had everything but actual cable trenched to the house. Honestly, I looked into that years ago. They wanted $8,000/mile and they needed to come about ten miles. That was north of twenty years ago. I don’t think it has gotten any closer because they always want to charge us to trench the cable.

For the past few years I’ve had Line-of-sight Internet and I keep a pre-pay Verizon 4G Jetpack around in case there is some huge outage while I’m working remotely. I must have pitched the Cradlepoint because I couldn’t find it to take pictures. It had a nice fail-over feature where you could set up a primary Internet port and tell it to use a cellular dongle when that Internet went down.

The experience has led to some posts over the years. 2011 Adding WiFi to Your Whole Farm. Whole Farm Wifi 2015. 2017 Beware Range Extender Max Number of Users. Now it has led to this one.

The recent event

Some tower upgrades had to happen a while back. There were six of us lucky customers whose radios had to have a firmware update that physically refused to apply from the tower. He was out in his van a long time trying to figure out what was wrong. Gave him access to my router and went about my day. Several hours later he left and I could check email.

After heading back out to my office I found I could not print to my color laser printer. This always had a fixed IP address reserved in the router. The reason for the fixed address is that I had a DS10 Alpha for many years.

DS10 Alpha and external SCSI enclosure

It was just easier to have an IP address when I set up a PostScript print queue. The Alpha went to recycling several years ago and I never bothered changing the printer. You don’t fix what isn’t broken.

After a bit of poking around I found the IP address of my router was different in the third group. Odd, I thought. That printer was the only device not set up for DHCP so I manually went through the menus on the printer. When you can’t see it on your network, that is your second best option.

Lexmark CS310dn

After changing both interfaces to DHCP and rebooting the printer I could see it and print to it. This printer is good enough for my own personal printing, but I wouldn’t send anything out printed on it now. I’m just waiting for the last of several hundred dollars in toner to go through it before replacing it.

In hindsight, I really should have dug deeper

Ever since then I’ve had these periods of dramatic stutter-sputter Internet access. I kept reporting the issue and Amber couldn’t see anything on her end. (Yes, my line of sight provider is a small company and we know names.)

Yesterday this computer I’m typing on now just seemed to be dying when it came to network access. Not just Internet access. My NAS was difficult to reach. Print jobs went off into the mist for 10+ minutes before finding the printer. I noticed I was connected to the 2.4 GHz band because it showed the strongest signal. Forced the wifi to use 5 GHz and life seemed to get good.

This morning I had to dig into the problem. That was when I found Mike switched my router to be a bridge. It got me on the Internet, but a heads up would have been nice. The only real downside is that now the router doesn’t have a nice little list of connected devices because it isn’t providing DHCP in bridge mode.

My suspicion was that the regizzing of the router caused the range extender to fall back to some default of a 2.4 GHz connection, forcing all of my network traffic into that narrow little horse and buggy channel.

To confirm this I needed to access the admin page. None of the URLs in the documentation would bring the page up. As a last desperate gasp the documentation says to “use the IP address.” That sent me into the house where the router is and showed me it is now a bridge and as a bridge doesn’t have that nice little list.

There are command line tools, right?

Yeah. If you do networking you probably have everything installed. Those of us who don’t want to get that close to the metal anymore tend to only know a few. On Windows you are kind of screwed.

arp -a

You open a terminal with “Run as Administrator” and you type

arp -a

You can see the output (with my IP addresses smudged) in the above image. I tried every IP address in the browser and none of them opened up the range extender admin page. Took my laptop out to the office in case I had to be on the other side of the range extender to see it and same result.

Try Manjaro

The table I was working at had a machine on it running Manjaro. Linux distros just have to keep pissing people off. None of the old standby network tools are there. Not even ifconfig. If you want to know your machine’s IP address you have to type

ip address

A bit of fumbling around had me doing the following.

sudo pacman -Sy arp-scan
arp-scan output

The command I used was:

sudo arp-scan --interface=wlp6s0 --localnet

You need the output of “ip address” to know what interface value to use. What is really really nice is the fact it gives you the manufacturer names. As you could see there was only one Netgear device.
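On a busier network that list gets long, and a case-insensitive grep narrows it down. A sketch, using made-up sample lines in arp-scan’s IP/MAC/vendor format rather than my real network:

```shell
# arp-scan prints one line per responding host: IP, MAC address, vendor name.
# Filter that output down to a single vendor, case-insensitively.
filter_vendor() {
  grep -i "$1"
}

# Hypothetical sample in arp-scan's format, piped through the filter:
printf '192.168.1.1\taa:bb:cc:dd:ee:ff\tNETGEAR\n192.168.1.7\t00:20:00:aa:bb:cc\tLexmark International\n' \
  | filter_vendor netgear
```

In real use you would pipe `sudo arp-scan --interface=wlp6s0 --localnet` into the filter instead of the sample printf.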

Yes, the reset had dropped everything back to the 2.4 GHz band, the lowest, weakest option. I changed back to the highest channel/frequency and life is good!

How to install PostgreSQL on Fedora 33

XpsnQt uses PostgreSQL as its database so “How to install PostgreSQL on Fedora 33” became a question worth answering. Once again, most of the stuff you find online is horribly out of date.

Open a terminal and type the following to see just how much PostgreSQL stuff there is in the repos.

sudo dnf search postgresql

That is going to scroll a while because it is a lot! The actual install is accomplished via this terminal command.

sudo dnf install postgresql-server postgresql postgresql-server-devel

Always install the server development package no matter what platform you are working on. Many things need it and few of them are good about listing it as a dependency.

When you are done PostgreSQL will be installed, but not active.

After PostgreSQL install

Just like Manjaro, Fedora doesn’t initialize the database. Unlike Manjaro, it needs a lot more done for the init.

Initializing database

If you try to do what we did in this Manjaro post, it just won’t work. Don’t you love how consistent the Linux community is?

Yes, I tried it.

I could have looked at the shell script and figured out what else it was doing, but just wasn’t worth the pain since this machine only has one disk. If you look at the output of the legitimate command it appears I just needed to set up a log directory.
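On Fedora the legitimate command is the wrapper script that ships with postgresql-server (it handles the distro-specific data and log directory setup under /var/lib/pgsql that a bare initdb misses):

```shell
# Initialize the PostgreSQL cluster the Fedora way.
sudo postgresql-setup --initdb
```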

Now you need to both enable and start the service.

sudo systemctl enable --now postgresql

All that is left is to add a user and you can begin enjoying your database.

Adding user
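The user step is typically done through the postgres system account that owns the cluster (a sketch; substitute your own login name for the hypothetical your_login):

```shell
# Create a database role matching your login; --interactive asks about privileges.
sudo -u postgres createuser --interactive your_login

# A database named after the user lets a plain `psql` connect with no arguments.
sudo -u postgres createdb your_login
```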

Happy computing!

Related posts:

Fedora 33 Black Screen Again

Fedora 32 – Black Screen After Login

Linux Distros That Suck at Multiple Hard Drives