
That Debian Multi-Arch Packaging Thing


Of my roughly 40-year career in IT, the past decade has been spent in the world of embedded systems, cross compiling via Multi-Arch packages. Some systems require building inside of Docker containers, which is not great, but it allows more of the never-went-to-college crowd into embedded systems as low-wage workers.

I’ve done a lot of Debian (and some RPM) packaging over the years. There was a time when all of the SOM/SoC vendors created development universes around Ubuntu, but now the world is abandoning Ubuntu for Debian. The dual viruses of “unattended upgrades” and “upgrade to Ubuntu Pro” nag-o-grams have become intolerable. I’ve not worked with them yet, but I hear some vendors are even moving to Manjaro. The Yocto Project likes Debian.

A Tale of Two Frustrations

Docker

Some view it as Utopia; even lightly seasoned pros don’t. You install some Docker-centric build tools from your hardware vendor, then you pull down/create a Docker container in emulation mode with a stripped-down Buster (or other). After that you spend a few days finding out that few, if any, of the packages you wish to use are available to install for your target. This forces you to create scripts that retrieve and build source inside the container, and you have to build the dependencies in the proper order. Newbies usually hose this at least twice. Then they learn:

One new dependency build == one new base container uploaded to the hub that gets used for the next attempt.
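
To make that concrete, here is a rough sketch of the pattern. The registry name, image tags, and the libfoo tarball are all hypothetical; your vendor’s tooling and naming will differ.

# Build one dependency from source on top of the current base image,
# then publish the result as the new base for the next dependency.
cat > Dockerfile.libfoo <<'EOF'
FROM registry.example.com/vendor/arm64-buster-base:1.0
RUN apt-get update && apt-get install -y build-essential wget && \
    wget https://example.com/src/libfoo-1.2.tar.gz && \
    tar xf libfoo-1.2.tar.gz && \
    cd libfoo-1.2 && ./configure && make && make install
EOF
docker build -f Dockerfile.libfoo -t registry.example.com/vendor/arm64-buster-base:1.1 .
docker push registry.example.com/vendor/arm64-buster-base:1.1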

Once you successfully generate a containerized app, you have to actually get it to the target. Oh yeah, the target probably needs something other than the default OS.

Yocto

Yocto is not for the faint of heart. You need a lot of cores and fast spinning disks. SSDs only seem fast: they tend to have a not-small write cache, and once you pop past the end of that cache and have to wait for writes to complete they get very slow. I tend to stuff WD Black or higher-end Barracuda spinning disks in my build machines for the actual build. You will be creating somewhere between 30K and 50K temporary object files when building a complete Linux for your target.

On an i7-gen4 loaded with all the RAM you can force onto the board and using the fastest disks you can find, a full Yocto build can take upwards of 27 hours. Get it right the first time!
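
For anyone who hasn’t lived it, kicking off such a build looks roughly like this. The branch and image name are just examples; a real BSP from your SoC vendor adds its own layers and MACHINE setting, and that is where the hours (and disk space) go.

# Fetch the reference distribution and build a minimal image
git clone -b kirkstone git://git.yoctoproject.org/poky
cd poky
source oe-init-build-env build
# edit conf/local.conf to set MACHINE for your target board, then:
bitbake core-image-minimal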

I did recently drop almost $2K building an i9-gen13 machine for Yocto and other development purposes.

As long as you don’t have your computer in your bedroom or where you watch TV, I can highly recommend one of these.

Stick your own fast spinning disks in it and, even with a sucky 25 Mbps line-of-sight Internet connection, your builds will get done in well under 4 hours. (Yocto builds have to do a lot of version checking and source retrieval.) Actually, that time frame is from running inside a VM under Windows 10. I haven’t done a full Yocto build since I wiped it and put Manjaro on. Theoretically, using the entire machine, it could get done in about an hour.

Summary of Frustrations

No matter what, someone has to do a Yocto (or other) OS build for your target. You won’t pay attention to it when you are blindly installing what your hardware vendor told you to install, but both of these methods require Multi-Arch support. You sure as Hell don’t want to build an OS on the target. Even the Raspberry Pi crowd is starting to learn about cross compiling these days. It may have enough hardware to run one or a few applications, but as a full desktop for development it sucks!

For the OS build you need a lot of cores and a lot of RAM and very fast spinning disks. SSDs have a short lifespan when placed under this kind of load.

The Artifact

What sent me down a rabbit hole was an artifact.

[Screenshot: Debian /usr/lib tree]

Yeah. I spent a day or so, around various interruptions, changing my Debian packaging script so it would park library files under /usr/lib/x86_64-linux-gnu. Then I poked around and noticed /usr/include had no corresponding directory. A question posted on the Debian forum didn’t shed any light either.
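
For what it’s worth, the triplet in that path does not have to be hard-coded; dpkg will hand it to you. A minimal sketch, with a hypothetical library name and the usual debian/tmp staging directory:

# Ask dpkg for the multiarch triplet instead of hard-coding it
MULTIARCH=$(dpkg-architecture -qDEB_HOST_MULTIARCH)    # x86_64-linux-gnu on amd64
mkdir -p "debian/tmp/usr/lib/${MULTIARCH}"
cp libwhatever.so.1.0 "debian/tmp/usr/lib/${MULTIARCH}/"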

The Experiment

I cloned my Debian 12 VM. Once it booted, I opened a terminal and did the following:

sudo dpkg --print-foreign-architectures
sudo dpkg --add-architecture arm64
sudo dpkg --print-foreign-architectures

sudo apt-get update
sudo apt-get install build-essential crossbuild-essential-arm64
sudo apt-get install gcc g++ cmake make ninja-build codeblocks
sudo apt-get upgrade
sudo apt-get autoremove

sudo apt-get install libx11-dev libx11-dev:arm64 libx11-xcb-dev libx11-xcb-dev:arm64
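
With the foreign-architecture dev packages installed, a quick smoke test of the cross toolchain looks something like this (hello.c being any trivial C file you have lying around):

# Cross compile a trivial program and confirm it is an arm64 binary
aarch64-linux-gnu-gcc hello.c -o hello_arm64
file hello_arm64    # should report an ARM aarch64 ELF executable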

May you live in interesting times.

Ancient Chinese curse
[Screenshot: dpkg --print-foreign-architectures output]

So, we have the new architecture enabled. Notice the new directory though.

[Screenshot: directory listing after adding arm64]

Consistency is too much to ask for in the world of Linux. The package architecture is arm64 but the hardware architecture is aarch64. Thank you very much.

[Screenshot: aarch64-linux-gnu directory]

Basically, for Multi-Arch support they now create an architecture-linux-gnu directory under /usr so the emulator environments can chroot (or whatever) to this.

Yep, that’s where the header files are.
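
You don’t have to discover that mapping by poking around the filesystem; dpkg will spell it out for you:

# Map the dpkg architecture name to the GNU triplet it uses on disk
dpkg-architecture -a arm64 -qDEB_HOST_MULTIARCH    # prints aarch64-linux-gnu
ls /usr/aarch64-linux-gnu                          # include/ and lib/ live under here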

Summing It All Up

No matter what architecture you are building your Debian package for, you package into the standard /usr/lib, /usr/include, etc. directories. The control file is where the magic happens. You need both an Architecture entry

[Screenshot: target architecture entry]

and a Multi-Arch entry. Usually this entry must be set to “same” for binary packages.
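
As a sketch, the relevant stanza ends up looking something like this (the package name and description are placeholders):

Package: libwhatever1
Architecture: any
Multi-Arch: same
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: hypothetical shared library built for whatever architecture dpkg targets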

You also have to adhere to package naming conventions.

packagename_version-release_architecture.deb

The “architecture” portion of that name has to be the dpkg architecture tag, not the directory architecture tag.
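
For example, version 1.2.3-1 of a hypothetical libwhatever1 package built for arm64 would be named:

libwhatever1_1.2.3-1_arm64.deb

not libwhatever1_1.2.3-1_aarch64.deb.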

sudo dpkg-architecture -L

Use the above command to get the full list of what dpkg supports on your distro. It’s a long list. You probably want to pipe it into grep if you have some idea what might be in your architecture name/tag.
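
For example, to narrow it down to the Arm flavors:

sudo dpkg-architecture -L | grep arm    # arm64, armel, armhf, and friends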

Roland Hughes started his IT career in the early 1980s. He quickly became a consultant and president of Logikal Solutions, a software consulting firm specializing in OpenVMS application and C++/Qt touchscreen/embedded Linux development. Early in his career he became involved in what is now called cross-platform development. Given the dearth of useful books on the subject, he ventured into the world of professional authorship in 1995, writing the first of the "Zinc It!" book series for John Gordon Burke Publisher, Inc.

A decade later he released a massive (nearly 800 pages) tome "The Minimum You Need to Know to Be an OpenVMS Application Developer" which tried to encapsulate the essential skills gained over what was nearly a 20 year career at that point. From there "The Minimum You Need to Know" book series was born.

Three years later he wrote his first novel "Infinite Exposure" which got much notice from people involved in the banking and financial security worlds. Some of the attacks predicted in that book have since come to pass. While it was not originally intended to be a trilogy, it became the first book of "The Earth That Was" trilogy:
Infinite Exposure
Lesedi - The Greatest Lie Ever Told
John Smith - Last Known Survivor of the Microsoft Wars

When he is not consulting, Roland Hughes posts about technology and sometimes politics on his blog. He also has regularly scheduled Sunday posts appearing on the Interesting Authors blog.
