How to Install Jed from AUR on Manjaro Linux

One of the things most new (and even seasoned) Linux users find frustrating is the organization of repositories. Linux is supposed to be Linux everywhere, yet when you want to install the jed text editor on Manjaro you have to go on a mining expedition and learn about the AUR (Arch User Repository).

Arch based distributions are notoriously difficult to use. Hopefully you read my earlier post about getting Boinc to run. With the Debian and RPM based distros Boinc mostly kinda-sorta just works, though their package maintainers are incredibly remiss when it comes to declaring Oracle VirtualBox as a dependency, leaving people to scrounge. At least the YABU distros have a custom Oracle VirtualBox package for Boinc; the package maintainer for Boinc just hasn’t been polite enough to flag it as a dependency.

AUR

The Jed story is even worse. Arch based distros created the concept of the AUR (Arch User Repository). The site contains the following statement in bold.

DISCLAIMER: AUR packages are user produced content. Any use of the provided files is at your own risk.

If one believes all developers to be of good and decent making, this is a place for an industrious package developer to post the fruits of their labor and suddenly have it available to all Arch based distributions. If one believes Russian/Chinese/insert-group-here hackers are out to distribute malicious code any way they can, then this is a cesspit from which there is no return.

The truth is found somewhere between those two goal posts. Most (possibly all) AUR packages must be built. If you had nothing better to do with your life you could read and analyze each line of code before actually installing. You do that, right?

AUR is a community based thing. There are submission rules, and if something is found to be malicious or non-conforming the moderators (or whatever they prefer to be called) will nuke it from the AUR. Somebody, of course, must first be a victim and complain.

Given the build requirement, the very first time you install a package from AUR you get a lot of other stuff installed.

Jed command line install

AUR requires what Debian users would call “build-essential” to be installed. Even if you never use jed as a text editor in a terminal, a new user of an Arch based distro should open a terminal window and install it first thing. You will then have most of the build environment you could ever want.
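If you would rather pull the build environment in explicitly first, something like this should do it (base-devel is the package group Arch and Manjaro use for compilers, make, and friends):

sudo pacman -S --needed base-devel git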

For those who just wanted the command without the knowledge, here it is:

sudo pamac build jed

For those who just want to search for a package in the AUR:

sudo pamac search -a jed

Be warned that if you search for something common you will get a long list.

Boinc on Manjaro

Historically, Boinc on Manjaro has been a trail of tears for most users. Most other distros do a fine job of packaging Boinc for their flavor of Linux, but Manjaro, not so much.

There are always a bunch of manual tasks one has to do to make things actually work. Most of these are cloaked in mystery, and most of the Web sites you find will have out-of-date information.

Search engines – the last great refuge of stale data

One of the real issues for users of Arch based distros is the constant churn and change. I first tried the Gnome edition of 5.20 and Boinc Manager failed to even start. I scrapped that attempt and pulled down the KDE edition. Life went better. Not perfect, but better.

It’s not all bad

One of the reasons I really wanted to try Manjaro was the touted speed improvements. The other major reason was that the “Live” ISO has an option to boot with proprietary drivers.

That’s right, people: every proprietary driver it knows about for your machine, it gives you the option to use. No more nightmares trying to find the correct Nvidia driver for your card. Plug in all of those USB wireless adapters you have and see which ones it likes.

My test system isn’t seeing the major speed improvements, at least not at boot time. I did test boot on an FX-6100 AM3+ computer and it did seem to boot much faster than Ubuntu. The jury is still out but I’m inclined to give the benefit of the doubt considering the system I’m testing on.

i5 gen-4 test system

You know, I’ve always called this a gen-4 i5 because it was sold to me that way years ago. I bought it used as a spare for a project and wasn’t very detail oriented when it came to the processor. It just had to be an i5 or i7 for the project. I see now they stuck me with a gen-3. It doesn’t really matter. It was cheap, and it was so long ago I can’t remember where I bought it. When I’m writing a new book it is still my favorite machine.

i5 with super floppy

It has an author’s best friend installed: one of my LS-120 Super Floppies. More than large enough to hold several copies of the book with images. A nice big label to write on. Easily transportable for off-site storage.

At any rate, that is what I’m testing on. As long as I’m not doing much disk I/O things seem pretty snappy. The 1TB drive is SATA-III 6Gb/s. It’s just that the SATA I/O capabilities of the machine aren’t blinding.

Installation

First you need to install virtualbox. A number of projects require virtualbox and most distros don’t bundle that dependency in with their boinc package.

Install the 2 with red buttons

Just install both virtualbox and the guest additions. LHC in particular needs this. On Ubuntu based distros the part of virtualbox needed by boinc is split out into its own package. It’s just not an automatically installed dependency.
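If you prefer doing that from the command line, here is a rough sketch. The host-modules package name must match your running kernel series, so check uname -r first; the linux54 name below is only an example for a 5.4 series kernel:

uname -r
sudo pamac install virtualbox linux54-virtualbox-host-modules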

packages to install

You need to install two packages. Yes, it should be one, but I’m going to save you some time and frustration here. There is a bug that Arch claims to have fixed. That fix has not made it to Manjaro. I have an open forum discussion on the matter.

Feel free to do your own research as to who to blame. I poked a bit and it seems there are issues between certain gtk versions and newer g++ compiler versions. Welcome to OpenSource, where nothing is actually tested. It’s a slightly better situation than the one with Microsoft, where nothing is really tested and they charge you money for it.

The Manual Steps

After you complete the install, open a terminal.

cd /var/lib/boinc
sudo chmod 640 gui_rpc_auth.cfg
sudo chmod o+r gui_rpc_auth.cfg

sudo systemctl enable boinc-client

Yes, there are people who can combine the chmod values in their head to do it in one line. I don’t mind hitting up arrow.
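For the record, those two chmod lines combine to:

sudo chmod 644 gui_rpc_auth.cfg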

Please note: lots of stale information on the Internet will turn up in search engines with respect to the enable statement above. That information is wrong; the service name was changed to boinc-client.

I’ve never done any pacman packaging. It is on my list of things to do in the future. I have done Debian and RPM packaging. Those little lines above should be in the postinst step. In Debian and RPM based systems they are. They generally even add one more line.

sudo systemctl start boinc-client

You could choose to just start the service. I prefer to reboot. There will most likely be other dependencies that got installed and I just feel it is cleaner to reboot.
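Since I brought up packaging: here is a hypothetical sketch of the Debian postinst fragment that would handle all of this for the user, with the paths and service name exactly as used above. It is not pulled from any actual package.

#!/bin/sh
# debian/postinst sketch - hypothetical, not from a real package
set -e
chmod 644 /var/lib/boinc/gui_rpc_auth.cfg
systemctl enable boinc-client
systemctl start boinc-client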

Note 2: If you

cd /var/lib/boinc

and find nothing there with the ls command, you need to do the following:

sudo systemctl enable boinc-client
sudo systemctl start boinc-client

Start the GUI Boinc Manager and let it fail with a message about gui_rpc_auth.cfg. Exit the GUI Boinc Manager, then you can

cd /var/lib/boinc
sudo chmod 640 gui_rpc_auth.cfg
sudo chmod o+r gui_rpc_auth.cfg

The joy of continuous updates is the joy of having installation and configuration continually changing on you.

boinctui

Until that bug with Boinc Manager is fixed, you won’t be able to use the GUI to add projects. You will have to open a terminal and launch boinctui.

boinctui

Your version won’t already have projects and event log messages; it will be mostly blank. You can use the mouse in a “text mouse” manner. This will be confusing to younger people because the mouse pointer still looks like a GUI pointer. You can click and double-click, but don’t try anything fancy.

Don’t expect it to be instant! The mouse clicks are going through several layers of things to get to the TUI (Text User Interface) application. Sometimes they get lost or eaten as a snack by another application. If you open the application in full screen mode life will be easier for you.

The help at the bottom

You will notice you can hit F9 to activate the top menu. Then you can use the arrow keys to navigate and the Enter key to select.

Projects -> Add project by URL

The simplest way to add a project is via URL. It is also the most infuriating way. Oh, it is not because the data entry is difficult. So much has been done with the GUI for so many years that there simply isn’t a nice list of project URLs you can easily find on-line. This list is about as good as it gets.

The easiest thing for you might be to go to a machine that is already running Boinc with the projects you want to support. Select one of the projects on the Projects tab, then click the “Properties” button.

Properties dialog

You can see the URL listed at the top. You can select it with your mouse and save it to a text file, or email it to yourself on the other machine. Here’s my list.

https://lhcathome.cern.ch/lhcathome/
http://www.cosmologyathome.org/
https://boinc.bakerlab.org/rosetta/
http://www.worldcommunitygrid.org/
https://csgrid.org/csg/
https://www.gpugrid.net/
http://einstein.phys.uwm.edu/
Project entry

Please note: Your “Backspace” key is useless here. Despite all the claims about VT-100 emulation in the Linux terminal world, it is obvious none of the people making those claims has actually used a VT-100. The TUI is expecting actual VT-100 keystrokes. If you make a typo you have to use <CTRL>-H to delete the character behind the cursor.

You must use the TAB key to navigate between these fields. As the little dialog says, Enter will transmit the dialog contents to the back end for processing.

After you’ve added your projects, life will be good. You can monitor them in the GUI manager.

Boinc running on Manjaro

Breaching TLS/SSL

In some large part this essay is a follow-on to the “You Are the Security Breach” essay. It’s a result of a knock-down drag-out I got into on a technology mailing list. True, I have quite a discussion about security in my upcoming “The Minimum You Need to Know About the Phallus of AGILE” book, but this particular discussion needed to be had in a more general context. Each and every one of you is being put at risk by a combination of greed and stupidity. Yes, I know, in today’s politically correct world we aren’t supposed to use the word stupidity, but it fits. Ignorance is curable; stupidity is not.

Time and time again you will hear the mantra:

There are three factors to security:

  1. authentication
  2. authorization
  3. encryption

Well, they are wrong. The fourth and most important factor is “don’t be stupid.” This is also the most often ignored factor because, while it is 100% fixable, it is not a one-and-done checkbox. In the previous post I told you about using things like

<ssn>123-45-6789</ssn>

in data transmissions. I also told you a bit about data striping. It’s time I also told you a bit about the dark-world claims of tools which can breach TLS/SSL at will. During the knock-down drag-out I took about five minutes to think about how they could be doing it, especially since most of the stories out there aren’t about breaching the sites; they are about decrypting a puddle of sniffed packet traffic. As far as I can tell there is absolutely no way to stop someone from parking software on the Internet and saving copies of all packets passing through their location. They aren’t tampering, just logging. It should set off no alarms.

The classic argument that “it would take a super computer N years running full tilt to crack that encryption” is generally made by people talking out their ass. With botnets and server farms for lease, you can have the equivalent of 10,000 super computers at your disposal for very little money. According to this 2017 article the smallest of the top 4 botnets discovered and shut down had 6,000,000 infected computers and the largest 30,000,000. The owner of the one with a reported 30 million infected computers was earning US $139,000/month leasing out the net. So, if someone really wants to perform a brute force cracking attempt the computing power is out there. I haven’t done the math to find out just how many potential key values there are when combined with the number of supported encryption methods TLS/SSL has, but if you can make 30 million attempts per second, a trillion permutations won’t take long to run through. (Keep this bit of knowledge in mind as we approach this discussion from a different yet related direction.)
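For rough perspective, take those numbers at face value: a trillion is 10^12 and 30 million attempts per second is 3 x 10^7, so 10^12 / (3 x 10^7) comes to roughly 33,000 seconds, a bit over nine hours for the whole run.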

We will start with XML and why you should never use it.

Every conforming XML document successfully transmitted will start with the following line:

<?xml version="1.0" encoding="UTF-8"?>

Oh, the version number may change, as may the encoding string, but the first 14 characters are required to be there per w3schools and Wikipedia. The Wikipedia link will also show you an optional second line:

<!DOCTYPE example [<!ENTITY copy "&#xA9;">]>

I will make the following uneducated assumptions:

  • Once a “secret” is negotiated during the TLS/SSL handshake and the encryption method is set, it remains in effect for the duration of the conversation.
  • You can use the same encryption methods as are coded in the OpenSource, either without getting “wrapper bytes” or with those bytes easily identified and stripped off.

“Wrapper bytes” might need a bit of explanation. Back in the days of DOS, if you used PKZip to compress a text file, it used to put a “pkzip” string at the beginning of the output with a version number, hashed password, yadda yadda. Some encryption methods will put wrapper bytes in front of or around the target output.

You need a PostgreSQL database (possibly multiple, depending on disk size) and one table.

Column name           Data type     Key number  Description
encrypted_value       text/varchar  0           Encrypted value of the first 12 characters.
encryption_key_value  text/varchar  1.2         Key fed into the encryption algorithm.
encrypted_length      int           -           Length of the encrypted_value text. Used to determine how many threads are needed.
algorithm             int           1.1         Subscript into the algorithm list.
hit_count             int           -           Number of times this encrypted value matched a packet.
completed             text/varchar  -           Timestamp string identifying when the encrypted_value column was filled in.
dispatched            text/varchar  -           Timestamp string identifying when this record was dispatched to generate an encrypted value.

It is possible some encryption libraries/functions will generate more than the original 12 characters so we need the encrypted_length column. This allows us to determine all of the sliding windows we need. The column is initialized to zero.
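For the record, a minimal PostgreSQL sketch of that table as described above. The UNIQUE constraint models the 1.1/1.2 alternate key, and the table name matches the query further down:

CREATE TABLE encrypted_xml_table (
    encrypted_value      TEXT    DEFAULT '',   -- filled in by the results-receiver
    encryption_key_value TEXT    NOT NULL,     -- key fed into the algorithm
    encrypted_length     INTEGER DEFAULT 0,    -- length of encrypted_value
    algorithm            INTEGER NOT NULL,     -- subscript into the algorithm list
    hit_count            INTEGER DEFAULT 0,    -- matches against sniffed packets
    completed            TEXT    DEFAULT '',   -- timestamp string when filled in
    dispatched           TEXT    DEFAULT '',   -- timestamp string when dispatched
    UNIQUE (algorithm, encryption_key_value)
);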

The general idea is to use the database to drive everything. When you create the empty table you populate it with all permutations of encryption_key_value and algorithm. All other columns will be empty/null/zero depending on datatype. Populating the machine requires a work-dispatcher service and a results-receiver service. The dispatcher receives a request for work and performs:

SELECT encryption_key_value, algorithm FROM encrypted_xml_table WHERE dispatched = '' LIMIT 100;

If it gets rows back, it sends them out. If it comes up empty, it has to select all rows dispatched over N hours ago which still have completed = '' to get more work for the worker.
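Assuming the timestamp strings are stored in a sortable format, and taking N as four hours, that fallback query might look like:

SELECT encryption_key_value, algorithm
FROM encrypted_xml_table
WHERE completed = ''
AND dispatched <> ''
AND dispatched < to_char(now() - interval '4 hours', 'YYYY-MM-DD HH24:MI:SS')
LIMIT 100;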

The results-receiver gets back a packet of records containing algorithm, encryption_key_value, encrypted_value and encrypted_length. It uses the alternate key of algorithm (1.1) and encryption_key_value (1.2) to update the matching rows with these values and fill in the completed timestamp with the current time.
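A sketch of that update, with $1 through $4 standing in for whatever parameter binding your client library uses:

UPDATE encrypted_xml_table
SET encrypted_value = $1,
    encrypted_length = $2,
    completed = to_char(now(), 'YYYY-MM-DD HH24:MI:SS')
WHERE algorithm = $3
AND encryption_key_value = $4;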

While you need to fully populate all of the potential alternate key values, you do not need to have all of the encrypted_value fields filled in to begin using this. You just need a sufficient quantity to start having successes. Remember: computers suck at random. No matter how hard the developers try, there will be a natural distribution (bell curve) of key+algorithm pairs which prove successful.

Now would be a good time for you to revisit the botnet portion of the discussion. A nefarious organization with access to a large botnet could get tens of millions of rows filled in over a short period of time. One could also have a batch of cast-off i5 and i7 gen4+ machines they choose to stick some NVIDIA cards in and use the CUDA cores. The “how” doesn’t matter. Yes, you have some programming to do, but it is not overly complex. All of the encryption methods are already in the OpenSource for TLS/SSL; you just need to pull them in and use them. Every worker is encrypting the same 12 characters over and over again. Your real bottleneck will be the database. You might have to do transactions of 1000 at a time so you aren’t constantly waiting for the database (depending on how/where you host it).

Assuming you don’t use a botnet, you’ve done nothing illegal at this point. You’ve just created a personal research project to see how long it would take to generate all possible encryptions of a single 12 character string via the TLS/SSL algorithms. Heck you could even do some extra programming and create a BOINC project.

Once you have a few million records you can start the second part of the code and that will be a later post.

PCLinuxOS and BOINC

That gap between Christmas and New Year’s is a wonderful time if I’m not on a project. That’s when I try to tick entries off that “I Wish I Had Time For…” list. Yes, there will probably be a few more posts about the LS-120 saga, but those will come later. For now I discuss the saga of PCLinuxOS and BOINC. Why here in my blog? Because continuing the conversation in private messages isn’t going to help anyone.

I’ve spoken many times about needing a minimal Linux distro for running BOINC. Whenever I’m going to need to break down a machine for use on something I take the time to try one or more disks I ordered from OSDisc.com. Some of the discussion on PCLinuxOS made it seem like it would be a good candidate. I just wish I had visited the forum and searched for BOINC before I bothered trying.

BOINC has been dropped from the repos. There is a bit of a brouhaha going on between myself and a few others in the vein of “no it doesn’t – yes it does.” Rather than enjoy that discussion on my own, I thought I would let it educate you, dear reader.

“I’m sorry too, but there are NOT. Everything needed to run the binaries downloaded from Berkeley is already in the repo.”

“Which is exactly what I did and what I’m talking about. For the last 13 years, I have used binaries downloaded from Berkeley, ending with the 7.2.42 version from 2014, which I’m still running today. How could I be doing this if there are missing libraries? Just this year and just in case it might be needed, I built 7.6.33 from source. As it turns out, 7.2.42 continues to work fine for me so I’ve only installed 7.6.33 on a couple of machines for test purposes.”

No, I’m not going to name the individual. This isn’t about being evil, this is about improper testing being used to form an opinion. We are all guilty of it at some point. I took pictures so you can run the test yourself and come to your own conclusion.

I chose to skip removing unused hardware support. When I checked the list, the NVIDIA driver was part of it. BOINC would need NVIDIA to make use of the 384 CUDA cores in the machine.

Once the install was complete and I had rebooted, I refreshed the package lists and installed all updates.

One thing which never ceases to amaze me in the Linux world is installing a brand new ISO only to find north of 300 files need updating.

Once the updates completed I went to the BOINC download page and selected the recommended version, 7.2.42. After it downloaded I rebooted.

You will note that I’m not as good with RPM based distros as I am with Debian based ones. It took me a bit to remember you have to su to root.

A search via Synaptic for libwx returned nothing. True, I could have dug out the arcane tools which will tell you just what package provides a certain file, but if libwx was mentioned in a package description it should have returned something. I could have performed a Web search, which eventually would return this link for a generalized RPM packaging site. If you scroll down and click on the red bar for PCLinuxOS you see the following:
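For the curious, the arcane tool for an already-installed file is rpm itself; something like the following works on most RPM based distros. The library path here is hypothetical, substitute whichever file you are chasing:

rpm -qf /usr/lib64/libwx_baseu-2.8.so.0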

This shows the package has been renamed, so one would need to create a symbolic link from the command line. But an ordinary user wouldn’t go to such lengths. An ordinary user would try it like I did, probably not even using Synaptic to search for the package after getting the error message.
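For completeness, the symbolic link would look something like the line below; both file names are hypothetical since I didn’t pursue it:

sudo ln -s /usr/lib64/libwx_baseu-3.0.so.0 /usr/lib64/libwx_baseu-2.8.so.0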

As to my dutiful correspondent, I have a theory as to why it works for you. It’s simple and not far-fetched. You started with an older ISO of PCLinuxOS, one which existed well before the rename. You have been applying updates ever since, and the updates aren’t good about deleting old stuff. On your 90+ machines which are successfully running BOINC, you have the library which is now missing from the repo. Yes, if it has been renamed for whatever reason, it is missing.

Because I’ve worked in many environments where testing was a religion, not an afterthought, I’ve developed the habit of testing from scorched earth. Even when I check source code into a repository for a client I rename the working directory then perform a clean pull and rebuild to ensure nothing was missed. People would laugh at me for doing that, until they realized the tool we were using wasn’t good when it came to identifying changed modules. Then they all started doing it.

Could I have hodge-podged getting BOINC to work? Probably, but an end user isn’t going to.

Now, I have spent all of the play time I had and won’t be conducting any further tests for many many months. The screen shots are here and everyone can conduct the test themselves. I suspect the rename came about from trying to keep 32 and 64-bit binaries in the same directory. Many distros used to do this, now most don’t.

KDE Neon – Distcc and Qt

One of the tools which was widely touted years ago was distcc. This is a distributed compilation system which can be brutal to set up, but can also dramatically reduce compilation times for big jobs. It has fallen out of favor in recent years because most developers end up getting a machine with four or more cores and a modern enough CPU to have all kinds of virtualization and hyper-threading support. These machines also tend to have many more gigabytes of RAM than they really need, so, if Linux really is good about adjusting its disk cache memory usage, in theory you won’t see much of a boost. At least that is the argument I keep hearing, and usually I hear it from people using laptops for development.

Flaws in the Thinking

So, please allow me to point out some flaws in that (mostly Millennial) thinking:

  • Laptops, unless their battery life is measured in mere minutes, _always_ have underpowered components. Yes, you may have lots of RAM, a great sounding graphics chipset, etc., but the hardware children will have opted for the lowest power-consuming version of each. Even your USB ports will operate at both lower power and slower speed because the overall design goal was to make the battery last as long as possible. Be honest: when you are thinking about a new laptop and see “atrocious battery life” in the reviews, you click to the next one, don’t you?
  • A sucky network isn’t going to make anything run faster. Most shops which complained profusely about distcc not returning much bang for the buck typically have a horrible network where people groan any time they have to transfer even a print job on it.
  • Both make and moc have gotten much better when it comes to working with distcc. One of the big drawbacks of building really complex Qt GUI applications with distcc used to be that moc didn’t distribute well. I don’t know the particulars, but I don’t notice a problem anymore.

Distcc Experiments

Any C++ Qt application with a sufficient number of source files can benefit from using distcc. Assuming your network isn’t a three-legged dog running in deep snow, that is. As to hand tuning the disk cache and conducting other experiments with it, I don’t bother. You can read about a few experiments here. That mystical “sufficient number of files” threshold is much lower than you think. In order to verify this I needed to install the distcc monitor.

distcc monitor install

This is a little graphical tool which lets you see how your build is using the farm. After that I needed to install distcc itself. For some reason the software application tool doesn’t include it, but you can find it with synaptic.
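If you would rather skip the GUI package tools entirely, the apt equivalent should be (package names from the Ubuntu base KDE Neon sits on):

sudo apt install distcc distccmon-gnome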

distcc-pump

distcc synaptic install

I also installed distcc-pump. There are pluses and minuses here. The default configuration of distcc does not work with pump mode. There is a bug, with posts and reports dating back to 2007 and probably beyond, where you end up with a rash of “connection refused” errors when trying to let distcc dynamically find and use distcc servers on your network. I forgot about this and spent some head-slamming time figuring it out.

The Next Step

Go that far on each machine which is to be part of your distcc compilation farm, then open a terminal and type the following:

distcc --show-hosts

On my main desktop it returned:

192.168.1.132:3632/24

192.168.1.105:3632/32

Now you need to know just who those machines are.

roland@roland-HP-Compaq-8100-Elite-SFF-PC:~$ nslookup 192.168.1.132
Server: 127.0.1.1
Address: 127.0.1.1#53

132.1.168.192.in-addr.arpa name = roland-desktop.

roland@roland-HP-Compaq-8100-Elite-SFF-PC:~$ nslookup 192.168.1.105
Server: 127.0.1.1
Address: 127.0.1.1#53

105.1.168.192.in-addr.arpa name = roland-HP-Compaq-8100-Elite-SFF-PC.

You need to tweak distcc just a touch:

sudo nano /etc/default/distcc

STARTDISTCC="true"
ALLOWEDNETS="192.168.1.0/24 127.0.0.1"
LISTENER=""
ZEROCONF="false"

The above lines need to be in your distcc file. Of course you need to change 192.168.1.0/24 to whatever your network is. Another major issue is that LISTENER needs to be blank; it just doesn’t seem to work any other way.
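After editing the file, restart the daemon so the changes take effect (assuming the Debian/Ubuntu service name):

sudo systemctl restart distcc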

As I said, I had quite a bit of head-slamming time trying to track down the “connection refused” issue, so I made a few changes which may not be required. I’m going to list them here and we will experiment more in another post when I try distcc from my Raspberry Pi.

$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 roland-HP-Compaq-8100-Elite-SFF-PC
192.168.1.132 roland-desktop

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters


cat ~/.distcc/hosts
localhost
roland-desktop


$ cat /etc/distcc/hosts
# As described in the distcc manpage, this file can be used for a global
# list of available distcc hosts.
#
# The list from this file will only be used, if neither the
# environment variable DISTCC_HOSTS, nor the file $HOME/.distcc/hosts
# contains a valid list of hosts.
#
# Add a list of hostnames in one line, seperated by spaces, here.
localhost
roland-desktop

Make options

Once all of that was done I was able to tweak the xpnsqt2 make parameters as follows:

distcc make options

Basically I added

-j40 CC=distcc CXX=distcc

You will find posts on the Internet telling you to add

QMAKE_CC = distcc
QMAKE_CXX = distcc

to your .pro file. While it is true this will cause qmake to generate

CC = distcc
CXX = distcc

it is also true that people tend to forget those lines are in there and then post a project on SourceForge or their own Web site which many people cannot build. The reason many wish to add it directly into the .pro file is so distcc gets used by the next user.

Those additional make options get saved in the .pro.user file, not in the .pro file. If you are building on multiple machines or have multiple developers all using a common build environment, it makes sense to put those values in the original .pro file and to look up how to force in the -j40 option as well. If you are working on something which will be released as an OpenSource project of some kind, best not to make those mods.

While I have not tried it, I have seen posts stating you can define the CC and CXX environment variables to be “distcc gcc” and “distcc g++” respectively. For someone working on many projects this is definitely the way to go, assuming it works.
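A minimal sketch of that approach, untested by me as noted:

export CC="distcc gcc"
export CXX="distcc g++"
make -j40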

Once all of that was done I fired up QtCreator, cleaned the project, then kicked off a build.

distcc using other machine


The machine I’m using as the build server is horribly named “roland-desktop.” It is an AMD 6-core machine with 20GB of RAM and an SSD. I should also mention it is running BOINC while idle. The machine which is actually on my desktop is that Compaq-8100-Elite blah blah blah machine. It is a quad-core i7 with 16GB of RAM and an SSD.

As the monitor clearly shows, even though the build server is weighted down by BOINC, a project this small benefited from distcc.
