Long Hours Kill You

The WHO (World Health Organization) recently published a study claiming working long hours will kill you. I agree, though I routinely do it.

Here’s the difference for me. I routinely work 80-90+ hour weeks when on a project. These are on-site projects far from home. I don’t do touristy things. I rarely, if ever, hang out with coworkers on weekends. I’m there to bank as much money as possible.

At the end of my contract I go back to the family farm to decompress. Sometimes I stay for three months, sometimes for close to two years. You can work 80-90 hour weeks, but you have to lead a semi-retired kind of life: long breaks between projects, and only taking projects you find interesting. You don’t even realize you are working like a slave when you truly enjoy what you are doing or are simply fascinated by it.

I’ve blogged before about how technical recruiters cannot understand a semi-retired life consultant. They are used to low-wage slaves who can’t miss more than one paycheck without being on the street. When your bill rate is high enough and you are working 80-90 hour weeks, you can take a lot of time off, if you live cheap.


I talk quite a bit about this topic in my latest book. In particular you probably want to read Karoshi – Do More With Less. There are other essays and conversations about the people I know of who died in IT. Some of them I personally knew. In other cases I arrived at the client site after the death. Others were just local lore about managers killing themselves in the office to buy their development team more time to complete the project.

I kid you not.

Management at most companies seems to want at least 60 hours per week. You can read this lengthy thread on Quora if you don’t believe that.

If you grew up on a farm you can generally work all the time. At least if you grew up on a farm when I did because that’s all there was to do. We had three television stations; five if the weather was perfect and we turned the antenna. There was no air conditioning and no Internet. You could sweat while reading a book or you could sweat while working. At least when you were working you were moving.

Featured image by Miguel Á. Padriñán from Pixabay

Business/Enterprise Class Computing

We now have a generation of kids who never worked on real computers, only x86 platforms, so Business Class Computing needs to be explained. This all started with an exchange I had on the qt-interest list with someone I respect.

-Text isn’t a stream.

Katepart would disagree.

(part of the exchange; their response to my previous message)

I run into this a lot when people have only worked on x86 based platforms or Unix. They don’t know what constitutes a Business/Enterprise class system or why. Some believe if you string a bunch of AWS modules together and run your enterprise on it then it must be a Business Class system. Nothing could be further from the truth.

We will start with some pictures to ease you into the conversation. This is a Class 8 truck.

Class 8 truck
What many of you think of when you hear truck

The Toyota Tacoma is generally considered a Class 1 truck. A light truck like this is what many of you think of when someone says “truck.” Yes, some of you will think of Class 2a like the Ford F-150 or the Class 2b like the Ford F-250 or Chevrolet Silverado 2500. The point is that most of the x86 platforms are one of these truck classes, and Business Class computer platforms are the Class 8 lines.

Yes, as long as something could be put into boxes that fit into the bed you could probably transport it with one of these lower class trucks, but should you? The answer to that question will probably become clear with this image.

FedEx truck and tanker truck on Interstate

Just how many little trucks would have to be on the road to keep your local gas station supplied with fuel? How cheap do you think FedEx (or anyone else’s) overnight delivery would be if they were limited to what could fit in the back of a light duty truck? Just how soggy do you want your packages? To round out the discussion and stop all of the “but but but” chatter, just how far do you think your light or medium duty pick-up will get hauling this?

Oversized load on truck with air tag

No, your eyes aren’t deceiving you. The tractor has an air-tag axle in front of the tandem drive axles. It only gets put on the ground when they are hauling something really heavy. This load obviously doesn’t qualify. Not that it matters for this discussion, but highway rules and regulations cap the maximum per axle weight, even with a permit, because roads and bridges simply can’t take much more. The only way to haul something really heavy is to put a lot of axles under it.

Computing platforms are not much different than the truck world; there are just fewer classes. Previously, we had the home hobby (x86), midrange (VAX, HP, AS/400, etc.), and the mainframe (IBM 360/370, Unisys, Amdahl). There were many makers; those are just examples, not a complete list.

Business Class Computing Differentiation

OS Understands Logical and Physical Record

Lots of people try to spin everything so that the x86 platforms can be considered Business Class Computing. As of this writing they cannot achieve the class. Oh yes, you have N-times the floating point calculation speed of a Cray; Y-times the memory and I/O capabilities of the VAX 11/780 that was used to feed work into said Cray; and some other multiplier of some other hardware point. None of that matters. You don’t yet have a business class operating system.

I know Windows and Linux fans are wailing at such a statement, but it is a simple fact. We will explore that fact in this post.

A Business Class operating system is required to provide and support both the logical and physical definition of a record.

This doesn’t mean simulated with streams or any other hack you will find on the current x86 based operating systems. The definition of a record is the foundation for all other business class functionality. This is how you do locking, have indexed files, and have file journaling. This is how you get something like MQ Series to restart after a hard system crash due to power loss and automatically re-dispatch the messages that were in flight at the time of the crash. This is how robust systems work.

OS Provides At Least One Native Indexed File Type

Indexed files still exist in large numbers. While “new” development should avoid them, they still must be maintained. If you don’t have a relational database on your box, they are state of the art. You just have to be careful not to lock yourself in. I talk a lot about the multi-typed record in this book.

The Minimum You Need to Know to Be an OpenVMS Application Developer

It was the norm back in the day.

 Key 0:
     Order_number      char 10        (15 in systems written later)
     Rec_type          char 2
     Sequence_no       char 2         (sometimes called line number)
 Generic map with filler at the end for some amount.

 Record Type
 10        Invoice header
 20        Bill to information
 30        Ship to information
 40        Carrier information
 60        Invoice detail
 61        Detail comment
 62        Credit or discount line
 70        Credit or discount summary
 80        Invoice summary

This is a typical example of a multi-typed record for an order file. Not a full record obviously. The primary key started with some character based order number followed by a two character record type followed by a sequence number. Depending on the file type the sequence number (usually character as well) could be either two or three characters. It depended on how many “comment” type lines were allowed on the thing. Usually 01 – 99 was more than enough for most applications.

Why was this design used? It was incredibly fast. You did a keyed hit to the 10 record for a specific invoice, then sequentially read until the invoice number changed or you hit end of file. When you are building an order entry screen that has the bill-to, ship-to, etc. at the top and a limited scrolling region for detail lines, this is perfect. Keep in mind these were green screens.
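The access pattern above can be sketched roughly like this, with Python standing in for an indexed file. The key layout follows the example record map; the record contents and function name are illustrative, and a real indexed file does the keyed positioning inside the file system rather than in application code.

```python
# Sketch of the keyed-hit-then-read-sequential pattern: position on the
# "10" header record for an order, then read until the order number
# changes or EOF. bisect emulates the keyed positioning of an indexed file.
import bisect

def read_invoice(records, order_no):
    """records: sorted list of (key, payload) where key is the 10-char
    order number + 2-char record type + 2-char sequence number."""
    start_key = order_no + "10" + "00"          # keyed hit at the header
    i = bisect.bisect_left(records, (start_key,))
    out = []
    while i < len(records) and records[i][0][:10] == order_no:
        out.append(records[i])                  # sequential read
        i += 1
    return out
```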

green screen example

You would just have a field for Vendor/Customer ID. You would navigate to it and hit some key combination to bring up a screen like the one above.

This was amazing. It was fantastic. This was a trap. The sheer amount of code written for these types of files made it almost impossible to bring in relational databases. For companies that never grew beyond the limitations of the indexed file system it was okay. Everybody else eventually had to bite a really big bullet.

OS Provides Record/File and Other Resource Sharing

These platforms were designed to be multi-user from the start. Even back when PDP 11 machines maxed out at 2 MEG of RAM and every process had to fit (with the OS) inside 64K Words (it was a word, not byte, addressed machine) we still handled more than 60 simultaneous users plus various batch jobs. Here is an educational and entertaining side trip.

Because these operating systems were designed with business in mind, they considered the need for 40+ data entry clerks, all at terminals running the same order entry application, writing records to the same file. Initially that file was a sequential transaction file that was periodically closed and fed to a batch job for processing. Every clerk appended records to it, then was required to log off and take a break.

Data Entry

If you can’t comprehend having a room full of data entry clerks manually keying orders in get yourself a copy of this book and read up on IT history. In particular you want to pay attention to “Please allow 6-8 weeks for delivery.”

These operating systems could allow multiple users into indexed files, but disk was incredibly expensive. The transaction files were originally punched cards and data entry was a keypunch operator.

Keypunch machine with operator

Later things went to paper tape from a terminal of some sorts. Eventually that went to magnetic tape. All still a batch transaction file to be fed into one or more master files.

The whole transaction file batched into master files architecture started going away as companies found they could afford more disk. Now data entry was a terminal writing directly into the master indexed files.

IBM 3270 terminal

Don’t get fixated on the phrase “a terminal.” It was one or more rooms full of operators at terminals. The typewriters just got changed out for computer terminals.

It was nothing to see 40 people (mostly women) in a room performing data/order entry. Every one of them entering orders into the same master file. The records management system provided all of the record locking and I/O. Depending on the indexed file type and the platform it could also dynamically expand the file.

Languages Work Together on Business Class Computing Platforms


      1        FILE=DRAWING_DATA,
      2        STATUS='OLD',
      3        ORGANIZATION='INDEXED',
      4        ACCESS='KEYED',
      5        RECORDTYPE='FIXED',
      6        FORM='UNFORMATTED',
      7        RECL=K_DRAWING_RECORD_SIZE/4,
      8        KEY=(1:8:CHARACTER),
      9        DISP='KEEP',
      1        IOSTAT=L_DRAW_STAT,
      2        SHARED,
      3        ERR=999)

The FORTRAN above is a snippet from a program found in this book; the book pairs it with a COBOL program. While it may not be obvious to you, both programs operate on the same file and can do so at the same time. This is because the records management system provides the definition of the file and all access goes through the records management system.

Languages Required to Support Indexed Files

COBOL, FORTRAN, and most other languages for business class computers had/have standards mandating support for indexed files. Said support is usually somewhat generic so it doesn’t favor just one OS.

You may not have guessed this if you only worked on x86 based platforms, but the language specifications of many languages actually require indexed file support. One of the reasons it took so long for a “free” COBOL compiler on Linux is that Gnu COBOL had to find an indexed file library that had all of the functionality required by the language specification. They settled on Berkeley DB, but it is behind a login screen with Oracle. You can read more about that at this link.

Common Calling Standard

Here’s where the x86 platform really falls apart: for the most part it lacks a common calling standard. This is also where some wiggle room was granted in the language specifications. Technically there is a FORTRAN calling standard, a COBOL calling standard, a DIBOL calling standard, an insert-name-here calling standard. That means a function or subroutine written in language X is required to have a certain interface: a given method of arranging/receiving parameters, points of entry, points of exit, and methods of returning values, if you will.

On business class computing platforms you will find they try to respect that desire, but they tend to create a universal calling standard. This is how COBOL can call FORTRAN passing an array as a parameter even though FORTRAN stores arrays in a completely different manner.

When it comes to languages like C/C++ that like to pass things via pointer, you can get into all kinds of trouble. To get around such trouble, VMS (and probably other platforms) passes parameters by descriptor. This is a well documented structure that contains all kinds of information about the string, array, custom object, whatever, along with the address of said object. This allows under-the-hood “glue code” to re-arrange data if needed.
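For a feel of what a descriptor carries, here is a rough Python/ctypes sketch shaped like the classic 32-bit VMS string descriptor. The field order (length, data type, class, pointer) and the two constant values follow the documented VMS layout; treat everything else, including the function name, as illustration rather than a real binding.

```python
# Rough sketch of a VMS-style string descriptor: a small structure that
# tells the callee what the data is and where it lives, so glue code can
# re-arrange data between languages when needed.
import ctypes

DSC_K_DTYPE_T = 14   # data type: text (string of 8-bit characters)
DSC_K_CLASS_S = 1    # class: fixed-length ("static") descriptor

class StringDescriptor(ctypes.Structure):
    _fields_ = [
        ("length",  ctypes.c_uint16),  # size of the data in bytes
        ("dtype",   ctypes.c_uint8),   # what the data is (text, float, ...)
        ("dclass",  ctypes.c_uint8),   # how it is organized (fixed, dynamic, ...)
        ("pointer", ctypes.c_char_p),  # address of the data itself
    ]

def describe(s: bytes) -> StringDescriptor:
    """Wrap a byte string the way glue code would before handing it
    to a routine written in another language."""
    return StringDescriptor(len(s), DSC_K_DTYPE_T, DSC_K_CLASS_S, s)
```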

Try writing a system using six different languages on Linux. Don’t cheat by making each language its own stand-alone program. Have one program calling library routines written in five other languages. As an additional restriction, don’t use transpilers that convert all of the source to C/C++ and compile that. You won’t have gotten any of the languages’ benefits and you won’t really have completed the test.

Try doing the same thing on Windows and don’t cheat using DOT-NET.


Business Class Computing platforms provide a Records Management System that puts the multi in multi-user. It’s not some hokey SHARE hack like DOS had. It’s not something you can bolt on at a later date either. The kernel has to use and rely on the records management system.

Business Class Computing platforms generally provide a common calling standard. You can write your libraries and applications in as many of the languages supported on the platform as you want as long as the compiler works with the common calling standard.

x86 based systems generally don’t do this. OpenVMS is currently being ported to the x86 platform so soon there will be one business class operating system on the architecture.

For more interesting reading along these lines check out my latest book.

Edit: a few minutes after publishing

When Bill Gates was working on DOS he was working on an OS for a personal computer. There was very little storage and little memory. No thought was given to 64+ people trying to use it all at once.

When Ritchie was creating first C and then Unix he was using a PDP. Every operating system I ever used on the PDP had a records management system, so it wasn’t that he didn’t have exposure to such things. He was writing an operating system for a telecom switch and just wanted multiple people and processes to be able to run at once. It was never supposed to get out into the wild.

When Linus Torvalds was creating what we now call Linux he was creating a “free” Unix-like operating system for ordinary people. Even today Linux really only supports streams. You have to cobble together other things like PostgreSQL, Berkeley DB, etc. if you want multiple users in the same data at the same time. Yes, there is a difference between a journaling file system and file journaling. File level journaling is done to participate in transactions, usually across multiple files.

Yet Another Reason to Ditch Facebook

I will never understand why people join Facebook or why they don’t ditch Facebook. You should never post that much about yourself online. “Oh no, I have private pages” is the usual response.

Well, 553 million users just had their personal information posted on a hacker site. Yes, your information. If you are a user you should expect to be part of the 553 million.

Yes, I’ve posted about security breaches before. They happen all of the time. Unless they impact tens of millions of people they don’t even make the news anymore. Most of you probably don’t even remember when I wrote about the TJ MAXX breach. That seems so quaint these days.

Featured image by Tumisu from Pixabay

You people have to realize something. “Move quickly and break things” means security will be one of the broken things. The motto isn’t “Move quickly and only break that stuff over there.”

AGILE is nothing but hacking on the fly.

There can never be justification for using it when you expect people to pay money or trust what you create.

Let’s just see if the court system lets Facebook get away with “a year of identity theft monitoring” or if the Justice Department finally gets some teeth.

Protect yourself, ditch Facebook.

What surprised you the most in your career as a software engineer?

The race to the bottom.

I started in the early 1980s. Employers championed tight code that worked well. From the late 1980s until today there has been a constant dumbing down of software development. Hacking on the fly was always shunned. Now it is called AGILE, and expecting people to pay for a hand polished turd is standard business practice.

Software quality has fallen through the floor and it is gaining speed as it heads toward the planetary core. Patients are dying wholesale from medical devices developed using AGILE and nobody is going to prison for it. The 737-Max is a shining example of why you should never use AGILE and I’m willing to bet nobody from Boeing will go to prison for it. (I’m also doubtful that plane will ever be allowed to fly commercially again.)

Nobody really considers just how dangerous connecting every thing to the Internet really is. Consumer press is championing IoT despite constant reports of hackers taking over the things and using them for BOTnets. Most of the people developing the things aren’t highly skilled either. This adds to the problem.

The circle.

When I started, IT workers were well paid. You started out around $20K (which was a lot then) and within 3 years were getting paid north of $80K. After 5 years you were making north of $180K with bennies. With the dumbing down, and anyone who read a single “Teach Yourself How to Be Totally Useless in 21 Days or Less” calling themselves a programmer, employers have gone back to trying to get developers with 10+ years of experience for less than $80K.

The dehumanization of IT.

Through the 1960s to early 1980s, after you graduated from college (once there were college courses) you got hired into a firm with a bunch of other trainees. There was a formal training class teaching you how to develop software for that company using the home grown routines and libraries they had. At the end some were hired as full time coders and the others were sent down the road.

Today the primary “skill” most employers want is a willingness to work for absolutely no money. I worked for a client that used an off-shore team. The team couldn’t code. The system they delivered was a tragedy causing millions in financial loss. Still, they used them. Why? They worked for $10/day.

Companies used to take it upon themselves to make the IT people they needed. Now, they want to buy exactly what they need off-the-shelf for absolutely no money and once the project is done kick them to the curb all in the name of the bottom line.

What made IT work through the 1980s and 1990s was institutional knowledge. An IT department learned how the business operated and would take that into account when asked to develop a new system or make a modification. Now, IT workers have no institutional knowledge. I’ve seen automotive parts ordering systems developed that didn’t even have any code to handle core charges. I’ve seen order processing systems written that never took into account sales taxes. I’ve seen others that understood exactly one sales tax. I’ve seen payroll systems that didn’t have any means of handling a wage garnishment. The list goes on and on.

Iowa Caucus 2020 – AGILE’s Third Mega-Failure

Because AGILE allows companies to commit SOX accounting fraud, companies are adopting it in droves. Because it is yet another name for hacking on the fly without a plan, kids (not professionals, kids) love it.

No. Getting paid to do something doesn’t make you a professional. Winning $20 shooting hoops in the park does not make you a member of the NBA and neither does getting paid to hack on the fly without a plan.

A bucket of user stories for the current sprint does not in any way shape or form qualify as a plan.

Mega Failure #1 – HealthCare.gov

Some of you are probably too young to remember when HealthCare.gov first went live. It was a catastrophe. The fraud masters fed stories to the New Yorker, the Washington Post, and MedCity News proclaiming HealthCare.gov would not have failed had it employed AGILE methodologies. To date I’ve not seen any of them print a retraction, certainly not a retraction which gets echoed across the Internet. I must admit the old links to MedCity’s article no longer work, so perhaps they fell on the sword that much.

Government Computing News called these bastions of journalism out. HealthCare.gov failed because of AGILE. What the proponents of AGILE fail to admit is that systems exist which are too big to AGILE. Some of those systems fit in your hand or into a small box hanging on the wall.

Waterfall was created for a reason. You have to define the scope and direction of a project. You can’t just hack on the fly until the money runs out which is the current Silicon Valley Startup mentality. Waterfall is how you can get a team of people to walk from Chicago, IL to Kansas City, MO without them drowning in Lake Michigan. Agile lets them drown and you iterate with a new team.

Mega Failure #2 – 737Max

I haven’t physically confirmed this and most importantly, nobody is denying it. You can do a Web search for “Boeing c++ agile” and, depending on the time of year, find Boeing with a lot of “agile” developer jobs. It also appears they’ve moved quite a bit of IT to India.

boeing agile jobs image


I have speculated on this in a March blog post. More and more people are starting to support it. Some very familiar with crash investigations support the idea that a stall control system which did not allow the yoke to override it, as industry standard requires, is a red flag of AGILE. When you don’t have a system architect and you don’t have The Four Holy Documents written up front, you end up with a catastrophe like this.

Hacking on the fly to a bunch of “user stories” is not software engineering. It is so far from software engineering that it cannot even mail a letter to software engineering.


Mega Failure #3 – The 2020 Iowa Caucus

We don’t even have to speculate here. Kids hacking out phone apps love AGILE. It lets them hack on the fly, write their own tests to prove their code works, and makes them feel professional. Too bad they aren’t professional and the tests rarely prove anything. Most are just there to check a box.

Too big to AGILE can be a system small enough to fit in your hand. iDiot phone developers never realize that. (The “i” in iPhone really stands for iDiot. Who else would spend a thousand dollars or more for a few hundred dollars in parts?)

An independent test team working from The Four Holy Documents when developing an overall test plan would have found this “coding error” that still as of 4:29pm on February 5th can’t give us the final numbers.

The Four Holy Documents

  1. Business Requirements Document (BRD)
  2. System Requirements Document (SRD)
  3. System Architecture Document (SAD; a.k.a. System Architecture Specification or SAS)
  4. System Specification Document (SSD; a.k.a. Functional Specification; or System Functional Specification – SFS; or System Design Specification – SDS)

It is time for the Federal government to ban AGILE in all industries. While the Iowa Caucus failure may be funny, the 737Max and HealthCare.gov certainly weren’t.