Linux Distros That Suck at Multiple Hard Drives

Some Linux distros really suck at dealing with multiple hard drives. Too many “maintainers” only have a laptop.

Background

You need a wee bit of background before we jump in. Hopefully you can see the featured image. I recently picked up this Lenovo M93p ThinkCentre from eBay. I specifically bought an M93p instead of an M83 because I wanted two hard drives. I had a 480 GB SSD I wanted to move the Windows 10 install onto, and a 6TB Western Digital Black I wanted to use for the other operating systems.

Why did I buy this particular M93p?

Lenovo M93p Ports

I actually added the PS/2 ports today; the little cable to do that just showed up. It already had both serial ports, WiFi, and the NVIDIA add-on video card. If your eyes are really good you will notice that on the other side of that WiFi antenna is a parallel port.

Software engineers need a lot of ports. If book sales start picking up I may even have to break down and buy another dot matrix printer to print shipping labels with. Yes, parallel port dot matrix printers are still made. You can buy them from newegg.com today. There are lots of legal requirements to print with impact printers on multi-part forms in various shipping and transport industries. They also do a more economical and reliable job on mailing labels . . . if you buy the right one . . . and you have the proper printer stand.

Printer stand back

The best ones from days of old have both a center feed slot and a rear feed slot to accommodate either type of printer. Long-time readers of this blog will remember I started work on a Qt and USB series and then life got in the way. That was all USB serial ports talking to real serial ports. My Raspberry Qt series also involved quite a bit of serial port work. My How Far We’ve Come series involved quite a bit of serial port work as well.

Putting it mildly, I still do a fair bit of serial port work from time to time. If I get done with RedDiamond and RedBug without life getting in the way I’m going to start a new post series using CopperSpice and serial ports. The makers of Qt have honked off their installed base with the new “subscription licensing” for Qt 6.x and beyond. Even more honkable, if that is possible, is the chatter that they are trying to license the OpenSource QtCreator as well. Yeah, people are making a hasty exit from the Qt world and many are headed to CopperSpice.

Sadly Needed Windows

Unlike every other machine in this office, I needed to have Windows on this machine. There is some stuff coming up that will require it. There is no way in Hell I was going to try writing my serial port code using Linux in a VM. I may edit it there, but testing is a completely different story.

Perhaps you’ve never spent days trying to track down why some characters don’t get through, or worse yet, why the serial port just “stops working.” I have. After a bunch of digging you find that someone baked super secret control strings into the VM’s interface driver to do special things. Nothing nefarious. Usually it is there to support “remoting in” via cable connection.

Boot Managers

In the days of DOS and GUI DOS that Microsoft insisted on calling Windows, this was no big deal. BootMagic and about a dozen other competitors existed to help Noobies and seasoned pros alike install multiple operating systems onto the same computer. Honestly, I can’t even remember all of the different products that had a brief life helping with this very task.

OS/2 had Boot Manager baked in. Those of us needing to develop for multiple operating systems usually ran OS/2 as our primary. It just made life so much easier.

Early floppy-based Linux distributions came with Lilo. It was generally pretty good at realizing Linux wasn’t going to be on the primary disk. SCSI controllers could support six drives, and distributions were different enough that you had to boot and build on each.

Grub

Later many distros went with Grub. To this day Grub has issues. The biggest issue is that each Linux distro adopts new versions of Grub at their own pace and Grub has a bit of history when it comes to releasing incompatible versions.

Adding insult to injury is the fact that many Linux distros like to hide the files Grub needs in different places. When you run your distro’s version of “update-grub” (as it is called in Ubuntu) it has to be a really good guesser when it tries to add a Grub menu line for a different distro.

Your second fatal injury happens during updates. Say you have an RPM-based distro but have Ubuntu as the primary Grub OS. When your RPM-based distro updates and changes the boot options for its own Grub menu entry in its own little world, it has no way of informing the Grub installation that will actually attempt the boot. Sometimes an “update-grub” will fix it and sometimes it won’t. A bit heavier on won’t than will.
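
When this bites you, the usual band-aid is to boot back into the distro that owns Grub and make it re-scan everything. A hedged sketch, assuming Ubuntu owns the Grub in charge and os-prober is installed:

sudo os-prober       # scan the other partitions for operating systems Grub can chain to
sudo update-grub     # rebuild /boot/grub/grub.cfg with whatever os-prober found
# newer Grub releases ship with os-prober disabled; if so, set
# GRUB_DISABLE_OS_PROBER=false in /etc/default/grub before running update-grub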

Drives got too big

That’s the real problem. During the SCSI days when 80MEG was a whopper we put each OS on its own disk and just changed the boot device. That was our “boot manager.” Every OS existed in its own little universe.

As drives got bigger various “boot manager” type products could play games with MBR-based partitions. Only one partition could be “active,” so a tiny little boot manager got stuffed into the MBR and changed the active partition to match the requested OS.

It was a cheap but effective trick as long as you didn’t need more than four partitions, since only a primary partition could be flagged active for booting. Lilo and the other Linux boot managers eventually started allowing Linux distros to boot from Extended partitions.

Today we have GPT and UEFI

I’m not intimate with how these work. The Unified Extensible Firmware Interface (UEFI) created the spec for GUID Partition Table (GPT). {A GUID is a Globally Unique Identifier for those who don’t know. That’s really more than you need to know.}

Theoretically GPT can support a huge number of partitions, but Microsoft and Windows cap support at 128. UEFI should be replacing Grub, Lilo, and all of these other “boot manager” type techniques.

We shouldn’t have all of these problems

As you install each OS, it should obtain its partition GUID, then find the boot device and locate the UEFI partition on it. Then it should look for a matching GUID to update and, if none is found, create an entry. There is a spec, so every entry should follow the same rules.

(If you read up on the OS/2 boot manager you will see that from the 10,000 foot level UEFI and the OS/2 boot manager conceptually have a lot in common.)

When any computer boots from UEFI and there are multiple operating systems in the UEFI partition, UEFI should show the menu and let the user select. This should all be in hardware and firmware now. We shouldn’t have Microsoft trying to lock us into their buggy, insecure OS, and Linux distros shouldn’t be trying to ham-fist Grub into UEFI.
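
You can at least see what the firmware has registered, and hand-create an entry an installer forgot, from Linux with efibootmgr. This is a hedged sketch; the disk, partition number, label, and loader path below are assumptions for illustration, not anything a particular installer wrote:

sudo efibootmgr -v       # list the existing UEFI boot entries and their loader paths
sudo efibootmgr -c -d /dev/sda -p 1 -L "Manjaro" -l '\EFI\Manjaro\grubx64.efi'   # add an entry pointing at a loader on the ESP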

The Split

I wanted all Linux distros to boot from the 6TB drive. I wanted Windows and UEFI to stay on the tiny SSD. This isn’t unreasonable. As all of the background should tell you, I’ve been doing things like this for decades. I did not want to try and stuff everything on the 6TB.

Each Linux distro would get 500 GB – 800 GB depending on how much I thought I would be doing in it. That means I should be able to put up to 12 different distros on the drive.
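
Carving the 6TB drive into those chunks ahead of time only takes a couple of sgdisk commands per distro. A hedged sketch; /dev/sdb and the 600 GB size are assumptions, and you repeat the middle line until you have your dozen partitions:

sudo sgdisk --zap-all /dev/sdb                                  # wipe any old partition tables on the 6TB drive
sudo sgdisk -n 0:0:+600G -t 0:8300 -c 0:"distro01" /dev/sdb     # one Linux partition per distro
sudo sgdisk -p /dev/sdb                                         # print the resulting GPT layout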

That may sound like a lot, but it’s not. You’ve never written code that worked perfectly on an Ubuntu LTS and failed rather badly on some of the YABUs supposedly using that LTS as their base . . . I have. The only way to know things for certain is to have a bunch of test systems. When you are testing serial port code (or other device stuff) you need to be running on hardware, not in a VM.

Manjaro was the first failure

Manjaro kernel 5.9.16-1 was actually a double failure. I have this distro running on a pair of machines, but it is the only OS on them. I rather like what they’ve done with the KDE desktop. I rather hate the fact that PostgreSQL cannot access the /tmp directory, so a bulk import to restore a database doesn’t work on that platform. There are a few other odd Manjaro bugs as well.

I wanted to do some pacman packaging and some testing of the future serial port code in CopperSpice on Manjaro, so it was first on the list. It booted fast and seemed to install cleanly. I rebooted the computer and boom, Windows came up. I navigated to Advanced System Settings in Control Panel and tried to switch the boot OS. Boom! Windows was the only entry.

(*&^)(*&)(*

Let’s Install Ubuntu!

I had real dread when I reached for Ubuntu. That installer has had a lot of assumptions baked into it over the years. I was pleasantly surprised and slightly disturbed.

Installation went smoothly, and when I rebooted I was greeted with a Grub menu. Both Windows and Manjaro were on the Grub menu, but should we really be seeing Grub on a UEFI system with multiple operating systems? Shouldn’t there be a UEFI menu that just has an entry for Ubuntu, and when you select Ubuntu, shouldn’t that be when you see the Ubuntu Grub menu?

Let’s See if Manjaro Boots Now!

Once I verified Ubuntu could boot and apply updates I rebooted and selected Manjaro. That’s as far as you get. The Lenovo logo stays on the screen and nothing else happens. HP owners have the same problem according to Reddit.
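
I never did get it going, but for the record the usual rescue recipe is to chroot in from a Manjaro live USB and let Manjaro register its own UEFI entry. A hedged sketch only; the device names here are assumptions for this particular box:

sudo mount /dev/sdb2 /mnt              # Manjaro's root partition on the 6TB drive
sudo mount /dev/sda1 /mnt/boot/efi     # the EFI System Partition on the Windows SSD
sudo manjaro-chroot /mnt /bin/bash     # chroot helper shipped on the Manjaro live ISO
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Manjaro
update-grub                            # regenerate Manjaro's own grub.cfg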

Fedora 33 Was Next

The Fedora installer was the worst of the lot. If you chose the second drive via one of the manual methods, it looked for a UEFI partition on that drive. It wasn’t smart enough to determine what the boot device was and go look there. You couldn’t get out of the screen either. There was no back or cancel button; you had to power down.

Summary

Manjaro at least tried to install. It failed to create anything in the UEFI partition of the boot disk, and it showed no error about that failure. It refuses to boot from the entry Ubuntu created for it in Grub. Double failure. I suspect this is due to a combination of super secret stuff needed on the menu entry, Manjaro using a different version of Grub, and Manjaro potentially hiding the files in a place Ubuntu doesn’t know to look.
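
If you want to see that failure for yourself from the Ubuntu side, it only takes a couple of commands (a hedged sketch; it assumes the ESP is mounted at the usual /boot/efi):

ls /boot/efi/EFI        # every OS that registered itself gets a vendor directory here
sudo efibootmgr         # the boot entries the firmware actually knows about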

Fedora failed to get out of the starting blocks. That graphical installer needs a whole lot of work!

Ubuntu worked despite my expectations of abject failure.

Just because Ubuntu worked doesn’t mean every YABU will. Most tend to write their own installers. If the developer working on the installer only has a laptop, they are going to take unreasonable shortcuts.


Experiments with IUP Pt. 1

It seems rainy days and rabbit holes go hand in hand. I keep trying to get back to writing my GnuCOBOL book (having finished the first draft of the Emacs book) and I keep finding rabbit holes. In particular, I got involved in the GnuCOBOL GUI discussion.

The main version of GnuCOBOL is a transpiler. It translates the COBOL code to C, then uses gcc to compile it. There is a C++ fork being worked on that I haven’t played with yet. Given all of the work I’ve done using Qt, I was interested in hearing there was consideration being given to using Qt for the GUI of the C++ fork. At first it sounds cool to have COBOL generate your Qt application, but the more I thought about the massive footprint and the “adopt it as a complete religion” view of the framework, the more I leaned toward advising against Qt. You can’t “just sprinkle in” Qt. I get phone calls about projects from people/companies who tried to do that. At some point I’m going to play with NanoGUI and will probably recommend they use that. It claims to be just a UI library without the massive overreach of Qt.

Here is the other reason I would recommend they avoid Qt, even if I didn’t foresee technical problems. Qt Company has really honked off the OpenSource community with their licensing. As such, I expect KDE will be kicking Qt Company to the curb. I suspect they are a bit too heavily invested in Qt to kick it all the way to the curb. The rumblings I’ve heard match something I suggested a while back: a fork of 4.8, ripping out all of that worthless QML, then splitting the packaging up. We shall see what becomes of it. There are plenty of people looking to get involved in such a project, potentially renaming the fork to avoid confusion. Last I heard, all that FLOSS needs is a sponsor. Exactly what a sponsor does or would be on the hook for, I have no idea. I honestly don’t care to know. Whether it is FLOSS or someone else, a fork of Qt is imminent at this point. Maybe KDE will kick Qt to the curb completely and just start with NanoGUI? Instead of one massive overreaching application framework, have a lot of little cooperative frameworks. At this point Qt definitely needs to be a contestant on “The Biggest Loser.”

Anyway, I pulled down iup from SourceForge.

sourceforge screenshot

I extracted the file into a subdirectory under Downloads. In the new subdirectory you find two files of importance: install and install_dev

The first installs the shared libraries (.so). The second installs the header files and statically linked libraries for development. I ran them both just for grins, running install_dev first.

Like most OpenSource projects, the tutorial is out of date. It seems the only time people are willing to “do the paperwork” is in the bathroom after taking a good Trump and voting at least twice. Between email interruptions and Stevie Wonder leading Ray Charles through the woods, I got the first example to compile and run.

#include <stdlib.h>
#include <iup.h>

int main(int argc, char **argv)
{
  IupOpen(&argc, &argv);
  
  IupMessage("Hello World 1", "Hello world from IUP.");
  
  IupClose();
  return EXIT_SUCCESS;
}

Where the documentation went a bit off the rails was the command line to compile. It appears someone split the libraries into finer groupings. Here is the command line.

gcc -I/usr/include/iup example2_1.c -o example2_1 -liup -liupimglib

For whatever reason the tutorial skips adding the iup library. I’m guessing someone had an environment variable set or what is in iup got split off from iupimglib.

The problem with all GTK based applications is they look like an alien invader or something that stood too close to a nuclear reactor for too long. Then again, so did a quick test of a Qt application on this KDE system.

#include <QApplication>
#include <QMessageBox>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QMessageBox msg(QMessageBox::Information, "HelloQt", "Hello World!", QMessageBox::Ok );
  msg.exec();   // exec() blocks until the user dismisses the message box
  return 0;     // returning a.exec() here would leave the app running with no window to close
}
The accompanying qmake project file is just the stock template:

QT       += core gui

greaterThan(QT_MAJOR_VERSION, 4): QT += widgets

CONFIG += c++11

# The following define makes your compiler emit warnings if you use
# any Qt feature that has been marked deprecated (the exact warnings
# depend on your compiler). Please consult the documentation of the
# deprecated API in order to know how to port your code away from it.
DEFINES += QT_DEPRECATED_WARNINGS

# You can also make your code fail to compile if it uses deprecated APIs.
# In order to do so, uncomment the following line.
# You can also select to disable deprecated APIs only up to a certain version of Qt.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000    # disables all the APIs deprecated before Qt 6.0.0

SOURCES += \
    main.cpp

HEADERS +=

FORMS +=

# Default rules for deployment.
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target

Changing the icon requires just a touch more finesse under Qt.

#include <QApplication>
#include <QMessageBox>
#include <QIcon>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QMessageBox msg(QMessageBox::Information, "HelloQt", "Hello World!", QMessageBox::Ok );
    msg.setWindowIcon(QIcon::fromTheme("system-run"));
    msg.exec();   // exec() blocks until the box is dismissed; calling it a second time would pop the box up twice
    return 0;
}

It isn’t fair to say I cannot find an equivalent in IUP; I haven’t really dug into it yet.

First Look – Solydxk

This is really what KDE Neon should have been. Not some buggy YABU but straight off of Debian. Hopefully those in charge will consume this distro and re-brand it KDE Neon.

Having heaped that kind of praise, it is not without its quirks. While the “Welcome” screen slide show you manually navigate through seems nice, most of the “install” buttons just show the user a blue Cylon Eye with not so much as an idiot indicator bar. No messages and no concept of just how long something was going to take. I tried to install the NVidia drivers this way three times. Each time it took forever and claimed success. Each reboot showed that nothing worked.

Finally I closed that and ran the “Device Driver Manager” directly. The second or third time it actually worked. I believe there is some needed piece missing from the dependency list. I installed a few other software packages, such as BOINC and some editors, before rebooting, and the next installation attempt magically worked.

Well, worked might be a bit of a stretch.

While I could see the NVidia logo splash on the screen and pull up the NVidia control center to see I had driver 340.xxx, BOINC could not find the GPU. A bit of poking around confirmed what I suspected: 340.xxx isn’t packaged correctly in the repositories. I probably should not have installed BOINC until NVidia was installed either. Synaptic package manager to the rescue.

  1. Mark boinc, boinc-client, and boinc-manager for re-installation. Yes, boinc is a meta package, but just do all 3.
  2. Mark libboinc-app7 for installation.
  3. Search for “cuda” (without the quotes) and mark libcuda1 for installation as well as boinc-nvidia-cuda.
  4. Apply all changes and reboot. Your BOINC event log should happily find the CUDA GPU. (A command line equivalent is sketched just below.)
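
For the command line folks, the same repair is roughly this (a hedged sketch; the package names are exactly the ones marked in Synaptic above):

sudo apt install --reinstall boinc boinc-client boinc-manager   # redo the three BOINC packages
sudo apt install libboinc-app7 libcuda1 boinc-nvidia-cuda       # pull in the CUDA pieces BOINC needs
sudo reboot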

This is the distro which has so far managed to stay on the 6-core.

Most of the Linux world has become dissatisfied with Ubuntu. As the heads of the company desperately try to get rich with an IPO cash grab, the quality of everything it does has suffered greatly. It also appears Canonical may be positioning itself to abandon Linux much like Google is abandoning its Linux-Android bastardization. Google is focusing on Fuchsia, which is based on Magenta, and Canonical has its own repos for Magenta. Scroll down and read the text found at the link.

It does not surprise me to learn both Google and Canonical are potentially abandoning Linux. The kernel and security infrastructure have had severe design flaws since day one. You cannot make it secure without creating a completely new kernel from scratch passing everything via descriptor. The OS needs to support logical name tables which have full SOGW (System, Owner, Group, World) and ACL (Access Control List) level security on each table.

Honestly, I have not pulled down the source for Magenta or Fuchsia. I know this though. Both Google and Canonical are looking for “one OS to rule them all” from identity theft enabling device all the way up to desktop so, security, if it exists at all, will be a bolt-on of North Korean knock-off quality. You simply can’t add shit like that later, it has to be designed in.

Don’t fret little campers. Microsoft will be taking over/forking the Ubuntu distro, following the same path as Apple. What? You thought Apple still made an OS? Nah, they put a pretty pretty front end on BSD. While I would never install a virus known as Windows on any computer made in the last 10 years, some who do inflict this terror onto devices tell me under Windows 10 the included Bash shell is actually Ubuntu. I suspect they are several years away from having their own desktop sitting on top of Ubuntu, completely abandoning Windows as an OS but still calling whatever they ship Windows. They have already end-of-lifed Windows 10 Mobile and provided no migration path which makes me suspect they are banking on Magenta panning out.

Personally I consider that a bit more honest than frantically working on Fuchsia without officially announcing the bastard child known as Android will soon be taken out to the woods and have 2 put behind its ear.

First Look at KDE Neon 5.8.6

I finally got fed up with the sluggish performance of Linux Lite on my I7 Quad-core. I mean, a box with 16Gig of RAM, an SSD, and an Nvidia card with 384 CUDA cores ought to run much better. Indeed it has run much better with other distros, but those distros had issues I couldn’t live with and couldn’t take the time to try and fix. Eventually I will be playing with some YOCTO builds on my 6-core AMD, so I needed to find a responsive distro.

Installation occurred on 3 different machines:

  1. I7 Quad-core HP small form factor desktop with Nvidia card 16Gig RAM
  2. HP laptop (sorry don’t have it handy but believe it to be AMD based) 8Gig RAM
  3. Acer Aspire One 722-0022 netbook AMD C60 dual core 8Gig RAM

When you get the proper OS on that little Acer and get used to the tinier keyboard, it’s a sweet little machine. The only netbook ever made worth buying. Linux Lite was making it run like a dual floppy 286.

Installation on the I7 wasn’t rough, but wasn’t good. Once again I was bitten by the Ubuntu-don’t-test-shit bug. Because this desktop has a primary SSD and a 1TB drive, it booted to a Grub error after install. I’m sooooo used to this I didn’t even flinch at having to pull out my boot repair disk after a fresh install.

The Discover software center is a bit of a train wreck. If you have a lot of updates to apply, don’t be surprised if it just stops without error or warning during the update process. When that happens, experiment with ways of killing off the Discover process, then open up a terminal and use the following commands:

sudo apt-get update

sudo apt-get upgrade

This assumes you don’t have a lock to deal with. There are plenty of posts on-line about how to get rid of the update lock.

About all you want to do with the Discover package is use it to install Synaptic Package Manager, then use that trusted tool, which always works, to install everything else you want, like Thunderbird and LibreOffice. Yes, unlike so many other KDE desktop distros, this one didn’t stick you with worthless packages like KMail and Calligra.

Yes, I still had to hack the configuration to make the USB recognize my Doro 626.

sudo nano /etc/usb_modeswitch.conf

Change the line:

DisableSwitching=0

to

DisableSwitching=1

Save, exit, and reboot. Now when you plug your phone in it behaves as it should.

One of the recent updates might have fixed that though. I just checked my usb_modeswitch.conf file before writing this and it is back to the zero value, yet my phone still works when plugged in.
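
If you ever want to check what the phone looks like to the kernel, lsusb from the usbutils package is enough (hedged; it just lists what is enumerated, it does not prove usb_modeswitch is leaving the device alone):

lsusb     # run it before and after plugging the phone in and compare the device IDs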

The biggest issue you will run into is that KDE Neon does not provide a “Drivers” option anywhere in the installed applications. You know, one of those nice little graphical tools which chews on your machine for a while then magically spits up a list of proprietary non-free drivers available for installation. Once again you need to go to the command line:

roland@roland-HP-Compaq-8100-Elite-SFF-PC:~$ sudo ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:03.0/0000:01:00.0 ==
modalias : pci:v000010DEd00001284sv000019DAsd00001308bc03sc00i00
model    : GK208 [GeForce GT 630 Rev. 2]
vendor   : NVIDIA Corporation
driver   : nvidia-375 - distro non-free recommended
driver   : nvidia-340 - distro non-free
driver   : xserver-xorg-video-nouveau - distro free builtin

== cpu-microcode.py ==
driver   : intel-microcode - distro non-free

Once you have identified the available drivers for your machine, use sudo apt install to install them by hand. Yes, I still type apt-get a lot. I realize there is now the shorter apt; I just don’t always remember it while typing.
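
On this particular box that boils down to installing the entry flagged “recommended” above (package names taken straight from the ubuntu-drivers output):

sudo apt install nvidia-375          # the non-free driver marked recommended above
sudo apt install intel-microcode     # the CPU microcode package from the same listing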

After installing your Nvidia driver and rebooting you are ready to install virtualbox and BOINC. If you happen to participate in LHC it won’t take many days before the infamous “virtualbox not installed” notices start to appear. It’s another case of Ubuntu-don’t-test-shit. Sadly, this hack no longer works. The original Ubuntu base has historically had problems with protection settings on the installation directories which the hack used to get around. There appears to be a deeper problem now.

Let’s be honest. If you are professional enough to be running Linux instead of that obsolete Windows platform, you should be helping the LHC project out as well. We can’t live in a real-world Star Trek until we get this anti-matter thing all worked out. We just have to get a Linux distro which either A) installs virtualbox correctly or B) automatically adds BOINC to the virtualbox groups so it can read/write to all of the virtualbox directories. Actually, non-YABU (Yet Another uBUntu) distros don’t have this problem. It appears to be an Ubuntu-only pooch.
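
If you want to try the group fix by hand it is a one-liner. A hedged sketch; it assumes the boinc service account and the vboxusers group that the stock packages create:

sudo usermod -aG vboxusers boinc       # let the BOINC client reach VirtualBox's devices and directories
sudo systemctl restart boinc-client    # restart the client so the new group membership takes effect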

Now we get to the most infuriating part of KDE Neon.

K Desktop developers use KDevelop for everything. I won’t pretend to know much about KDevelop. When I looked at it years ago it made Eclipse seem lightweight. It had syntax and development support built in for every language which had ever been ported to Unix or Linux. Don’t think of it as an editor like UltraEdit with syntax highlighting for a zillion languages; this had full-fledged IDE support for each language. I don’t think it is possible to overstate the point here. Every Bell Labs worker and college student who had to create their own language to achieve some kind of curriculum goal seemed to have added support to KDevelop for their new language which had, at best, 3 users.

If you open up the Synaptic Package Manager and search for “compiler” you will see hundreds of files, but, if you scroll down looking at the descriptions you will see only about a dozen languages. Old standbys like C/C++, COBOL, FORTRAN, ADA and a few others, but all of those really weird 1-5 letter languages are gone. I was shocked there was only a BASIC interpreter now, not a compiler. It appears the BASIC compilers have their own little worlds now and don’t get pulled into distros much. I mean FreeBASIC has been around forever. As one can see, there are still quite a few BASIC compilers out there for Linux. I was really shocked to stumble across KBasic. Given all of the Qt work I do I must admit to never having heard of it.

At any rate, sorry for the diversion there. The K Desktop developers use KDevelop (which has lost about 7/8ths of itself by dropping support for all but a few languages) and KDevelop has its own project and build system, etc. They don’t need, want, or desire QtCreator because it is not an easy transition between the two worlds. The K Desktop is built using Qt. All of the libraries and things needed for KDevelop users to code and compile are there. QtCreator, however, will not install. One of its dependencies is too far behind the bleeding edge included with the desktop.

Calibre also will not install due to the same dependency issue, so don’t plan on using it to read ebooks.

Even though I’m a wee bit ticked off about not having QtCreator and Calibre, I must say this desktop feels lighter than a feather on all 3 machines. After I got all done getting the desktop to work “mostly” the way I want, I installed it on the HP laptop without a hitch.

The Acer is a different story. No proprietary drivers are listed for its video. The default driver has a nasty flicker problem. I mean, when dialogs pop up for a user response, it gets all discombobulated, flickering back and forth. Sometimes it leaves the dialog in the background. Other times the screen doesn’t repaint itself when the dialog goes away. There was also an oddity installing the Opera Web browser. I used the exact same .deb file on my desktop and then the HP laptop without issue, but on the Acer it just would not install. I was able to install from the Opera site though. As I said, it was odd.

I cannot wait for KDE Neon to jettison its Ubuntu baggage and go full on Debian. I have gotten sooooo tired of Ubuntu-don’t-test-shit, especially when it comes to KDE.

First Impressions of Mint 18 KDE

I originally tried Mint 18 KDE on my HP laptop. The plaque on the back says it is a 355 G2 if that makes any difference. I was underwhelmed with the beta. It had a vicious lock screen bug. If you required your password for the lock screen, you were toast. There are various discussions about hacks and work arounds, but none of them seem to work on the laptop.

There is also more than one bug report: rpt_1  rpt_2

About a week ago I pulled down the released version and installed it on my HP 8100 Elite small form factor desktop, replacing Ubuntu 16.04 because it was sloooow and Unity really blows as a desktop. I was impressed that the first time the screen saver came up it showed a message saying the screen locker was broken and you needed to do the following:

<Ctrl><Alt><F2>

login with password

loginctl unlock-sessions

then <Ctrl><Alt><F9> to get back to the windowing environment.

I was so impressed by this I did a fresh install on my HP laptop. No love! Never saw that screen saver message and cannot use the <Ctrl><Alt><F2> workaround. Probably has something to do with the blue <Fn> not being operational when the lock screen is up, but I do not know. I do know that it is a hard power-down fix at that point, so you have to set your screen locker timeout to the maximum value. There does not appear to be a method of disabling the screen locker. I guess one could try to find the system setting which sets “Require password to unlock” to false, but then you have an even larger security problem.

The default settings for KMail are completely unusable. Between the font, theme, and color scheme chosen you cannot read your message list. There is yet another nasty bug in KMail: you cannot change the size of the font in the message list. Oh, to be sure there is a setting for it and you can actually change the font itself, but not the size. If you change the size of the icon, the entry in the list will get a bit taller, making the font seem a little larger, but it is still unreadable for most. The light blue on white is a bit tough on the eyes, but if you change the message list font to Liberation Mono you can at least make it bold so it is more readable.

One thing worthy of note. Ubuntu 16.04 64-bit with all updates applied was slooooooow on my desktop, especially when trying to surf the Web. The beta of Mint 18 KDE on the laptop could get to the same Web page much faster than the desktop, both going through the same router at the same time. Everything seemed slow. This desktop is a quad-core I7 with 16 Gig of RAM and a super fast hard drive, not to mention a video card containing 384 CUDA cores. It ought to fly. With Mint 18 KDE installed it has become its snappy old self again.

Btw, trying to convert KDE to a green theme was a truly wretched idea. Thankfully it is only bits and pieces showing the ugly green.