Why There is So Much Tech in Ag Today

sprayer navigation system

The short answer is that some people are too stupid to be farmers. True, nobody wants to hear that, but it's a fact nonetheless. The stupidity progresses all the way up to the EPA and USDA, where some people are too stupid to hold jobs. It is sad, but it is true.

You can forget about all of the “fake news” spewed at you about GMOs. That's just heroin to get the masses high, and the masses are addicted to it. Many of these same people will soon be booking a trip to China to get themselves a genetically modified baby and think nothing of it.

The longer and more complete answer goes back many decades, to when the Keller MBA mentality was allowed to ooze its way into agriculture while nobody was looking. That's when the phrases “quick win” and “economies of scale” started winning out over the long-term health of the land. The healthy soil movement is only just now starting to creep back into agriculture. A big part of this dumbing down of agriculture came when kids opted not to farm and land started getting managed by people with a quarterly profit mindset.

Allow yourself to drift back to the days when I was a child. Yes, I really was a child at one time. We didn't have squat for weed control in soybeans. Weed control consisted of a few pickup truck loads of kids with hooks, hoes, or just bare hands, taking a few rows at a time, walking from one end to the other and back again until all of the weeds in the field were eliminated. Grass we could do nothing about. To help control weeds, farmers actually rotated crops. This year a field was soybeans. Next year it was corn. There was none of this corn back on corn for 30+ years like you hear about in Iowa today.

Part of the reason behind crop rotation was weed control, another part was insect control, and the final reason was to give the land a break. Soybeans didn't stress the soil like corn did. Underground insects like rootworms would starve to death when they woke up in the spring without any corn roots to eat. Ever since George Washington Carver championed crop rotation, this was how it was done. Oh sure, the two crops may differ from region to region, but this was how we did things.

Grass was just something nasty though. It would stay green long after soybeans were ready for harvest, and you just couldn't get a sickle to cut through it without breaking something. If you did manage that feat, a big wad of it would plug up the combine and you got to spend hours digging that out, then fixing whatever the wad broke.

Along came a chemical called Dual. It was called Dual because it would control grass on both corn and soybeans.

Yeah, bad idea.

This is where economies of scale first started creeping into farm thinking. It was cheaper if you bought and used more of it. Since it was the cheapest form of grass control (and pretty much the only form for soybeans at the time), farmers flocked to it like flies to shit. Nobody at the EPA bothered to think about the reality.

If you use the same chemical year after year on all of your land, the thing it was intended to kill evolves resistance to it.

Approving this chemical for use on corn violated the rotation principle. Previously, chemicals you could use on corn couldn't be used on soybeans, so the weeds didn't face the same thing in back-to-back years.

Fast forward to the Clarence Thomas era. Yeah, big chemical companies with fat checkbooks and lots of lobbyists also help get former employees onto the Supreme Court and decisions magically go their way.

“Round-up Ready” soybeans came first, around 1996. Honestly, they were a Godsend. By that time, unless you happened to have 20 kids of your own, you couldn't get enough kids to walk soybeans. Hot sun, insect bites and low pay had them all trying for jobs in shopping malls or at McDonald's. As more and more farm families left the business and fences were pulled, fields were getting bigger too. While 4 people could walk a 20-40 acre field in an afternoon if the weeds weren't too bad, a 200+ acre field left you with no sense of accomplishment. You would be going back to the same field for days, seeing the marker flag move nowhere near far enough. Trust me. I grew up on a farm and had such a pleasure. I still live on that same farm.

Insatiable greed pushed Monsanto to introduce Round-up Ready corn in 1998. Here is where you either have to believe that bribes and lobbying won the day or that everyone at the EPA and USDA involved in the approval process was simply too stupid to have a job. I mean they can't even flip burgers at McDonald's.

Guess what happened?

Economies of scale and Keller MBA level stupidity once again assaulted agriculture. If you paid the premium for Round-up Ready soybeans and Round-up Ready corn, you could get weed control for around $7/acre. It was the cheapest, so that's what got done. Nobody bothered to think about the looming catastrophe. Certainly not the regulators whose job it is to stop such catastrophes.

Before you go shitting on the farmers for not thinking, I have to ask one question.

How many of you bought antibacterial hand soap and contributed to the rise of antibiotic-resistant superbugs? If you bought and used the soap, you've got a share of the blame. Regulatory agencies should not have allowed that to be sold to the general public. A day late and billions of dollars short, the FDA finally did its job. The farmers only created weeds you once again had to cut with a hook to kill. You helped create something which can, and very well may, wipe out humanity.

Guess what? We now have a growing list of Round-up resistant weeds: 24 in North America and 41 worldwide according to this chart. Those numbers are sure to change.

Loooong before Monsanto created the Round-up fiasco, they created another catastrophe. This catastrophe was called Dicamba. This super weed killer briefly existed in Ag at a time when I was too young to remember it. In my part of the world it didn't make it two growing seasons. You see, back then farms really were family farms. Your kids went to school with the next farm's kids, and spraying something which would drift and destroy their crops was a thing you just didn't do. Someone who worked at our local chemical company reminded me of this last summer. They had actually sprayed it for one season and wouldn't handle it the next. There was no way to control it. Even if you didn't get sued (people didn't sue back then), the lost business would put you out of business.

Guess what?

The same stupid farmers who sprayed glyphosate non-stop to create these super weeds wanted the chemical companies to give them an economies-of-scale solution. Dicamba ready soybeans hit the market. Why? Because why come up with something new when you can dust off a failure for a profit?

Spraying technology has improved dramatically but, as that link will show, Dicamba is still an uncontrollable failure. Today we have GPS guided spraying systems like the little dash mount thing at the top of this post. Soon, if not already, as part of licensing, those things are going to be required to record the entire spray pattern to removable media or cloud storage using an Internet-sourced timestamp. Before spraying you will be required to enter what is in the tank, the crop you are spraying and the rate. All of this will be required to be sent to the USDA, EPA or some other agency. Spray systems will be required to be inoperable until they have this information and their Internet link.

Why?

Because Monsanto has a fat checkbook and lots of lobbyists. They are going to keep the Dicamba catastrophe on the market by funding as many elections as necessary. It doesn’t matter that it is the wrong thing to do. That fact doesn’t enter into Keller MBA think.

A TCP/IP Software Appliance

In the very near future, every viable business-class operating system will incorporate a TCP/IP Software Appliance. This is not a firewall. What we have today serving as firewalls may or may not serve any purpose in the future, but one thing is for certain: we cannot solve our security problems via any hacks to our existing socket and IP libraries, nor can security be improved by any future tweaks to SSL/TLS. I have been hearing for some time now that TLS has been breached at the architectural level. I don't know people high enough up to share any solid information, other than to tell me TLS hasn't been secure for a very long time.

We have a perfect storm creating this security problem, and I have been bringing it up on various Usenet newsgroups: worthless post-secondary education institutions; even more worthless MBAs being churned out by MBA mills like Keller; a general business mindset focusing entirely on this quarter's numbers; and a judicial system which doesn't put corporate arch-villains in prison. (Just how many Wall Street CEOs and board members went to prison over the mortgage fraud scandal which pulled north of a trillion dollars from the global economy? Just how many people in Wells Fargo upper management went to prison for opening a couple million fake accounts without customer knowledge1, in many cases ruining the customer's credit rating?)

Some of the people arguing with me were at one time college professors, and they themselves are a large part of the problem. Most colleges have become profit-driven businesses willing to put the lowest cost body in a chair in front of students whose parents and/or government are paying the full tuition fee.

Oh come on, you've all heard the news reports. In order to generate revenue, colleges are handing out grants and scholarships to students whose parents can pay for college, or at least most of it, instead of to the kids whose parents spent their entire lives working for minimum wage. They've learned how to squeeze profits out of scholarship dollars.

If a college has a grant program with $100,000 to give, it can give a full ride to one deserving child of minimum wage parentage, generating no revenue for the college, or it can give $5,000 each to 20 students whose parents could pay for college if they had to. Let's assume $100,000 gets you through a 4-year degree covering books, tuition and dorm. Give it all to the highly intelligent and deserving child of minimum wage parents and the college generates no revenue. Spread it across 20 well-off students and it brings in 20 * $95,000 = $1,900,000. Even non-profit state-run colleges are for-profit. They just have to spend that money on executive salaries and football stadiums to remain non-profit.
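The arithmetic is easy to check. Here is a quick sketch in Python; the $100,000 pool and $100,000 degree cost are the assumptions stated above:

```python
# Assumptions from the text: a $100,000 grant pool and a 4-year
# degree costing $100,000 all-in (books, tuition, dorm).
GRANT_POOL = 100_000
DEGREE_COST = 100_000

# Option 1: one full ride to the deserving low-income student.
full_ride_revenue = DEGREE_COST - GRANT_POOL   # the college collects nothing

# Option 2: spread it as $5,000 awards across 20 well-off students.
students = 20
award = GRANT_POOL // students                 # $5,000 each
partial_revenue = students * (DEGREE_COST - award)

print(full_ride_revenue)   # 0
print(partial_revenue)     # 1900000
```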

Grant programs are big business for colleges and universities. You make even more money by putting instructors who are “priced right” in front of students instead of instructors who actually know anything. Think I'm kidding? Lovie Smith's contract, approved by Illinois trustees, could pay him up to $29 million with incentives2. Why don't you research just how much they pay instructors teaching COBOL and relational databases?

Anyone who disbelieves that DeVry and Keller are shit schools needs to consider several facts. Fact 1: I am a DeVry alum. Thankfully I went to a high quality junior college first, because I learned basically nothing at DeVry other than the fact DeVry sold my financial aid information to credit card companies before I even started classes. Yes, I took a full-time job and had my own place to live. Less than two weeks into my living there, a Visa application with my name on it showed up. When I say “place to live” I don't mean an apartment in a complex; I mean I was renting the in-law apartment in the attic of a bungalow. At that time my parents didn't even have my mailing address because I had just sent them the letter with all my official contact information the day before. Oh, I also learned about student loan debt and how to work a full-time night shift then attend classes during the day.

DeVry changed hands several times, to lesser and lesser forms of life. I didn't follow the sorry history of that “educational institution,” but I did read this 2017 article stating the then-current owners had to inject cash into the schools before they could give them to the new owner3. That's right, they had to pay the next lower life form to take the schools off their hands after having done such a superfabulous job of running them, squeezing every nickel out rather than building something someone could be proud of.

In case there is one person in the universe reading this who doesn't believe businesses are hyper-focused on short-term gains to the point of sacrificing all future revenue, I'll just refer you to this article in the Atlantic4. I can also point you to the article where Warren Buffett, one of the most respected business minds of our time, has called for an end to the quarterly focus5.

Circling in the wall of this storm were “industry analysts” paid to commit fraud on a regular basis. They were paid to whisper in the ear of upper management saying “open good, proprietary bad.” Since they were marketing shills paid to commit fraud instead of actual industry analysts, not one of them bothered to think about security. All they knew was that Syphilis Willie Clinton was promising, at the height of his #MeToo violations, to spend our tax dollars creating the Information Super Highway, making the world a Global Village without a Global Village Council to manage it, and they wanted in.

At the crux of this issue are the Linux socket and IP libraries. Real operating system vendors had been focused on highly secure proprietary networking using proprietary and pricey networking hardware. This “open” thing meant using completely insecure software and cheap hardware. It rankled, to say the least. And then there was the blatant criminal fraud of “industry analysts” branding Microsoft operating systems, the most proprietary operating systems in the world, as “open.”

Even if you are a non-technical person who can barely operate a flip phone, you've heard about data breaches leading to massive identity theft. These breaches happen in large part because there is no way to secure *nix-based IP communications. The simple reality is that the complete anarchy of *nix-based IP libraries and applications means there is absolutely no way to know for certain all of the IP ports your application uses. After you scour hundreds of directories looking for cryptic text-based configuration files, you still can't be certain those are all the ports your applications use unless you read each and every line of the application and all supporting libraries yourself.

The simple fact is that *nix does it wrong and every platform which copied the *nix libraries in order to be “open” has also done it wrong.

*nix, and in particular Linux, grew without the slightest input from an architect as to its design. Much of the code was/is hacked out by 12 year old boys who wrote something because they thought it would be kewl.

The TCP/IP library, and to some extent the sockets library, grew like mold: no planning and no thought whatsoever to security, in an OS developed in complete anarchy.

The bulk of today's security breaches and mass identity thefts are a direct result of said growth of mold. __ANY__ application can open a port and communicate with the outside world. There is virtually no control, and even if you manage to find all of the configuration scripts for package-a, unless you look at the code you cannot be certain those are all the ports it uses.

In a scant few years, platforms which do not totally abandon the *nix sockets and IP libraries will become “non-strategic” in Gartner speak. The financial and criminal penalties are being raised worldwide even now. The GDPR is just the beginning6.

Carrying fines of 20 million Euros or 4% of gross income, whichever is greater, it is a great way for broke governments to balance the books without angering taxpayers. Other countries will be following suit in just a few years, if for no other reason than to stand in line to get a check after the EU prosecutes some corporation.

While I disagree with the last bullet on slide 17 of this presentation7, page 19 makes a good point fingering AT&T. This is where implementation went off the rails.

From what I've read, both IBM and Unisys have gone down the TCP/IP Software Appliance road: a central point all programs must connect with to communicate on the network, built into the OS in such a way that no application can open its own little IP socket. Not something blocked with a priv which can be gotten around; the capability has been physically removed.

I had occasion to revisit some information in my award-winning Service Oriented Architecture book8. Around page 150 I had the entry for a service I created as part of the book. You see, DEC (Digital Equipment Corporation) was decades ahead of the curve. They started down the path of a TCP/IP Software Appliance: one central place to configure and provide all IP services. The original intent was that no application would have direct access to the network. All applications would have to connect with services defined within the appliance. They had the inbound side of this almost perfect before the big push for “open” (read that as insecure) standards.

When defining an inbound (receiving) communication service, you flagged it as a “listener.” You also set a limit on the number of service instances which could be active at any one time. When a new connection comes in, the TCP/IP Software Appliance looks for an active yet idle service to assign the communication to. When it doesn't find one, it checks the current active count against the limit. (This gives you throttle control, so one service cannot eat your box.) If you are below the limit it spins up a new service to handle the communication; otherwise that connection request waits until something frees up.
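That dispatch rule is simple enough to model. The sketch below is not DEC's code, just a hypothetical Python rendering of the listener behavior described above: reuse an idle instance, spin up a new one while under the Limit, otherwise make the connection wait.

```python
from collections import deque

class ListenerService:
    """Hypothetical model of the appliance's listener dispatch rules."""

    def __init__(self, limit):
        self.limit = limit        # the service's configured Limit
        self.active = []          # instances currently serving a connection
        self.idle = []            # active-but-idle instances awaiting work
        self.pending = deque()    # throttled connections waiting their turn

    def dispatch(self, conn):
        if self.idle:                         # reuse an idle instance first
            inst = self.idle.pop()
            self.active.append(inst)
            return f"{inst} handles {conn}"
        if len(self.active) < self.limit:     # under the limit: spin one up
            inst = f"instance-{len(self.active) + 1}"
            self.active.append(inst)
            return f"{inst} handles {conn}"
        self.pending.append(conn)             # throttle control in action
        return f"{conn} queued"
```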

The page in this book isn’t wide enough to provide a good screen shot so here is a screen scrape:

Service: MY_H_SERVICE
                           State:     Enabled
Port:             4445     Protocol:  TCP             Address:  0.0.0.0
Inactivity:          5     User_name: HUGHES          Process:  MY_H_SERVICE
Limit:               5     Active:        0           Peak:         0
 
File:         DEV_DSK:[HUGHES]MY_H_SERVICE.COM
Flags:        Listen
 
Socket Opts:  Rcheck Scheck
 Receive:            0     Send:               0
 
Log Opts:     None
 File:        not defined
 
Security
 Reject msg:  not defined
 Accept host: 0.0.0.0
 Accept netw: 0.0.0.0

You will notice I also highlighted the two Accept lines at the end. Each service can define a list of hosts and networks which can use it. This is a night and day contrast with the hosts file on Linux. Each service chooses what can connect with it, and it is all in one simple location with a pretty complete tool to maintain it.
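For the non-VMS crowd, the effect of those Accept lines can be modeled in a few lines of Python. This is only an illustration of the idea (per-service ACLs, with 0.0.0.0 meaning “any”), not the actual appliance logic:

```python
import ipaddress

def service_accepts(peer_addr, accept_netw):
    """Illustrative per-service ACL check: 0.0.0.0 in the service
    definition means any host; otherwise the peer must fall inside
    the accept network (given as network:mask, as in the display)."""
    if accept_netw == "0.0.0.0":
        return True
    net, mask = accept_netw.split(":")
    network = ipaddress.ip_network(f"{net}/{mask}")
    return ipaddress.ip_address(peer_addr) in network

print(service_accepts("192.168.1.20", "192.168.1.0:255.255.255.0"))  # True
print(service_accepts("10.0.0.5", "192.168.1.0:255.255.255.0"))      # False
```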

Admittedly, this was a baby step application near the beginning of the book. If you are interested in the entire application please legally obtain a copy of “The Minimum You Need to Know About Service Oriented Architecture.”

$ type sys$login:my_h_service.com
$ lf[0,8] = %X0A
$ cr[0,8] = %X0D
$!
$ open/read/write net sys$net
$ write net "<HTML>"
$ write net "<HEAD>"
$ write net "<TITLE>OpenVMS</TITLE>"
$ write net "</HEAD>"
$ write net "<BODY>"
$ write net " "
$ write net "<p style=""font-size:150%"" >"
$ write net "Providing port services before there was SOA</p>"
$ write net "<p><B>How do you like those apples?</B></p>"
$ write net "</BODY>"
$ write net "</HTML>"
$ write net "Just some ordinary text"
$ exit

A bit later in the book, on a different service, I had a BASIC program which could be spun up to interact with the port. Here are a couple of interesting snippets.

   OPEN "SYS$NET" AS FILE #net_chan%,      &
        MAP PORTINMAP
...

 A930_USER_INPUT:
930 L_TRY_COUNT = 0%
    WHEN ERROR IN
        PRINT "Reading input"
        L_TRY_COUNT = L_TRY_COUNT + 1%
        L_ERR% = 0%
        GET #net_chan%
    USE
        IF L_TRY_COUNT < 1000%
        THEN
            SLEEP 1%
            PRINT "Trying again"
            RETRY
        ELSE
            L_ERR% = ERR
            PRINT "Tried 1000 times to read from internet"
            PRINT "Quitting with error "; L_ERR%
        END IF
    END WHEN

    RETURN

When the program providing the service is launched by the TCP/IP Software Appliance it executes in a process where the logical SYS$NET is defined to be the stream servicing the network communication. You open it just like any other file/stream.

The second snippet just shows the loop which tries up to 1000 times to read from the stream. This is 1000 times per call of the subroutine, not 1000 times total.

You should notice the application has no concept of transport layer security. It has no concept of networks or the Internet. Why? Because all of this must be done by the TCP/IP Software Appliance. No application should ever have any concept that it is communicating over the Internet or a local network. The only security the application should know about is application-level security, be that message encryption, secondary user authentication, or some other thing we have yet to define which has nothing to do with the transport layer.
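The closest thing *nix ever had to this model is an inetd-style service, where the launcher hands the program an already-connected stream. A hypothetical Python handler written to that contract looks like this; note it contains no socket code and no transport security whatsoever:

```python
import io

def handle(inp, out):
    """Hypothetical service body: the appliance (or inetd) owns the
    connection and hands the program plain streams, so the code has
    no sockets, no TLS, no knowledge of the network at all."""
    request = inp.readline().strip()
    out.write(f"echo: {request}\n")

# Under the appliance the streams would be the inherited connection
# (the SYS$NET idea); here we simulate them with in-memory buffers.
reply = io.StringIO()
handle(io.StringIO("hello\n"), reply)
print(reply.getvalue(), end="")  # echo: hello
```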

Please take a moment to look back at the definition for MY_H_SERVICE. Notice that last major heading: Security.

For connections VMS applications don’t initiate, VMS did it correctly. TCPIP itself just needs a few tweaks. For existing Listener type services it needs:

/SECURITY=NONE, TLS, whatever

/CREDENTIALS=(TYPE=TLS, SOURCE=blah, …)

/whatever_other_supported_transport_security_data_needed

TCPIP itself should be handling all of the transport layer security. The TLS stuff could even be added to the /FLAGS if that made life easier. There is already proxy stuff there.

Additionally, to support outbound only communications it needs

/NOFILE

/FLAGS=(Writer) – which turns off Listener

The combination of these two (plus whatever security) would create a service on a port which refused all inbound connections but could be utilized via either a LIB$ or SYS$ call from descriptor based languages.

LIB$GET_WRITER_SERVICE( SERVICE_NAME by DESC,
                        DEST_HOST_NAME by DESC,
                        DEST_HOST_SERVICE by DESC,
                        DEST_PORT by DESC optional,
                        LOGICAL_NAME by DESC optional)

The port would be needed to support IPv4 services without names. The logical name would be a process-level logical assigned the value. If not provided, it should default to SYS$NET_OUT. Well, I assume SYS$NET is process level; if it's job level, fine.

Every descriptor based language which needs to initiate outbound communications could just call this, completely oblivious to the transport layer security and upon success,

     OPEN logical_name$ FOR OUTPUT      &
          AS FILE #rpt_chan%
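Since LIB$GET_WRITER_SERVICE is a proposal, not a shipping routine, here is a hypothetical Python mock of the contract, purely to illustrate the flow: the caller names a Writer-flagged service and a destination, gets back a logical name bound to the connection, and never sees the transport security the appliance negotiates.

```python
# Hypothetical mock of the proposed outbound flow. None of these names
# are a real API; they only model the contract described in the text.
SERVICES = {
    "RPT_OUT": {"security": "TLS", "flags": {"Writer"}},  # assumed definition
}

def get_writer_service(service_name, dest_host, dest_port,
                       logical_name="SYS$NET_OUT"):
    svc = SERVICES[service_name]
    if "Writer" not in svc["flags"]:
        raise ValueError("service must be outbound-only (Writer flag)")
    # Here the appliance would negotiate svc["security"] with dest_host;
    # the calling application never sees any of that.
    return {logical_name: (dest_host, dest_port)}

env = get_writer_service("RPT_OUT", "reports.example.com", 4445)
print(env)  # {'SYS$NET_OUT': ('reports.example.com', 4445)}
```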

In case I lost the PC crowd: most real operating systems pass parameters by descriptor. A descriptor is a pointer to a structure which tells both sides the data type of the parameter, its location, size, organization, etc. There is one common base structure which matches the beginning of all larger descriptor structures, and the type of the descriptor defines the size of the descriptor. This is why you can write subroutines/functions in C, BASIC, COBOL, FORTRAN, etc. and call them from each other without having to do anything kinky in the code. There is no chance of overrun or missing null terminators or any of the other penetration techniques you hear about on lesser operating systems.
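For the curious, the classic fixed-form string descriptor is documented as a word of length, a byte of data type, a byte of class, and a longword pointer. Here is that 32-bit layout rendered with Python's ctypes, purely as an illustration (the VAX 32-bit pointer is modeled as an unsigned int):

```python
import ctypes

class Descriptor(ctypes.Structure):
    """Illustration of the common descriptor header (dsc$descriptor_s).
    Every by-descriptor parameter begins with these four fields, which
    is what lets BASIC, COBOL, FORTRAN and C call each other safely."""
    _fields_ = [
        ("dsc_w_length",  ctypes.c_uint16),  # data size in bytes
        ("dsc_b_dtype",   ctypes.c_uint8),   # data type code
        ("dsc_b_class",   ctypes.c_uint8),   # descriptor class
        ("dsc_a_pointer", ctypes.c_uint32),  # 32-bit address of the data
    ]

print(ctypes.sizeof(Descriptor))  # 8
```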

Now, if there are technical reasons GET_WRITER_SERVICE needs to be SYS$ instead of LIB$, that is fine. What matters is that an application should have no knowledge of transport layer security and no ability to create its own connection. If security needs to switch from NONE to TLS to LEFT-HANDED-MONKEY-WRENCHES, so be it. The service definition changes and the application goes merrily on its way.

While much of this conversation has occurred from the VMS operating system point of view, it is what every operating system must do. Now that the EU Global Privacy Law9 is a reality, it won't take long for the simple wording of the law to be interpreted by the courts as:

If you have a breach which wasn’t someone writing a password on a yellow sticky, you didn’t take adequate security measures.

In under three years there will be two classes of operating systems in data centers.

  1. Operating systems with a built in TCP/IP Software Appliance.
  2. Operating systems the business is quickly divesting itself of due to legal liabilities.

I’m not really good with the Dia drawing tool. I wanted to put the programs and appliance in a box but couldn’t make it work. In short, this is how the flow must go in the future once all viable operating systems implement the TCP/IP Software Appliance.

All applications will establish connection with the software appliance utilizing whatever services it has defined. The software appliance will handle all transport layer security which may also include first level user validation. The applications themselves will have no knowledge of the network.

What I mean by first level user validation is that Security heading in the configuration may specify a user database and handshaking method where, upon connection attempt, the outside world will pass in a username and password in some agreed upon encrypted format. The actual service or program on the other side may have additional username and password security which must also succeed before full communication occurs.

Your application simply does what it does. What is on the other side of the software appliance does not matter; that is the responsibility of the TCPIP package. Whether it is no security, the ever insecure TLS, the new not-yet-identified-as-insecure security plug-in, or the next not-yet-identified-as-insecure security plug-in, that is all the responsibility of the TCPIP package.

Application level security is handled by the APP. If it needs a 3-level key exchange, then it does the key exchange reading and writing from that stream. If it needs to perform a multi-layered lossless encryption understood by the receiving app before sending it through the communications channel, so be it.

One should not really mention either REST or JSON when discussing security. Within two years REST will be both a memory and a banned practice10. The land of anarchy and 12 year old boys cannot adhere to an enforceable standard.

Even though the perfect OOP for networking exists, and it does in the Qt networking and QIODevice based classes, saddling a non-embedded application with that responsibility is an architectural crime against humanity. It also makes it physically impossible to verify system security. This is the primary reason so many *nix and Windows based systems are constantly breached. Now you don't have one software appliance through which everything must pass; you have 5000 programs, most of them coded by the lowest cost off-shore labor one could find, all with gaping bugs and security holes.

The 12 year old boys all code for their one PC. Even the Windows developers aren’t any better. None of them ever grasped file versioning or what is required to play at the midrange and up level. Exposing customer data to breaches on a program by program basis is the horrific idea put forth by these 12 year old boys. There is no singular point where you can shut it down or control it.

RSYSLOG, currently the most popular Linux system logger, is a great example of this tragedy. The “default” configuration is not to accept messages from remote systems. Your TOTAL control over this is (on Debian based systems) in /etc/rsyslog.conf:

#################
#### MODULES ####
#################

module(load="imuxsock") # provides support for local system logging
module(load="imklog")   # provides kernel logging support
#module(load="immark")  # provides --MARK-- message capability

# provides UDP syslog reception
#module(load="imudp")
#input(type="imudp" port="514")

# provides TCP syslog reception
#module(load="imtcp")
#input(type="imtcp" port="514")

That's it: 2 lines for TCP and 2 other lines for UDP. No “allowed networks,” no “allowed hosts,” nothing else. Even if they _had_ provided something in the configuration, you would still have a viciously insecure system people were running around calling secure. Someone would have to find each and every config file, no matter what it was called or where it was stored, to determine what is getting in from where. A physical impossibility to maintain.

Ubuntu tried to address this issue with an ill-fated release where they shipped UFW, unannounced and enabled. Nothing worked. The Ubuntu Fire Wall blocked everything. Mass outrage. Only people with another system that could actually reach the Internet found the message about how to disable the firewall.

Midrange and higher class systems need a manageable, fully tested appliance through which all things go.

TCPIP> show service/full syslogtcp
 
Service: SYSLOGTCP
                           State:     Enabled
Port:              601     Protocol:  TCP             Address:  0.0.0.0
Inactivity:          0     User_name: UCX_SYSLOGD     Process:  SYSLOGTCP
Limit:              12     Active:        2           Peak:         4
 
File:         DEV_DSK:[UCX_SYSLOGD]SYSLOGTCP_STARTUP.COM
Flags:        None
 
Socket Opts:  Rcheck Scheck
 Receive:            0     Send:               0
 
Log Opts:     Acpt Actv Dactv Conn Error Exit Logi Logo Mdfy Rjct TimO Addr
 File:        DEV_DSK:[UCX_SYSLOGD]SYSLOGTCP.LOG
 
Security
 Reject msg:  not defined
 Accept host: 0.0.0.0
 Accept netw: 192.168.1.0:255.255.255.0

There is only one place to look to find what network and host can reach what service. There are flags and logging and all kinds of other things to help with security. None of that stuff exists in *nix. I ass-u-me none of that stuff exists in Windows, but wouldn’t know.

Professionals don’t use Microsoft products.

A simple stream/file based API exists between the program on VMS and the TCPIP Software Appliance on VMS.

The systems manager configures whatever he/she needs to configure for services, allowed networks, ports, flags and the protocol-level security method of the week. They can change the protocol security method of the week every other day if they wish. The App doesn't care. If the service decides to allow insecure TLS methods 1, 2 and current, then they enable all three on the service definition and the TCPIP Software Appliance uses the various plug-ins to communicate accordingly with each connection.
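Conceptually, the appliance just keeps a table of security plug-ins and wraps each connection with whichever one the service definition allows. A hypothetical Python sketch of that dispatch (the plug-in names and wrappers are made up):

```python
# Hypothetical plug-in table; the lambdas stand in for real transport
# security wrappers the appliance would apply per connection.
PLUGINS = {
    "NONE":   lambda data: data,
    "TLS1.2": lambda data: f"tls12({data})",
    "TLS1.3": lambda data: f"tls13({data})",
}

def wrap(service_enabled, negotiated, payload):
    """The service definition lists enabled methods; a connection may
    only use one of those, and the app never sees which was chosen."""
    if negotiated not in service_enabled:
        raise PermissionError(f"{negotiated} not enabled on this service")
    return PLUGINS[negotiated](payload)

print(wrap({"TLS1.2", "TLS1.3"}, "TLS1.3", "hello"))  # tls13(hello)
```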

The outside world runs whatever the Hell the outside world runs, security and integrity be damned.

This is far more secure than anything I’ve heard talked about before. Made even more secure by the fact your programs on the back side can run as regular ordinary users without need for the privs of God.

Here’s a really great thing. Go pull down the Freeware SYSLOGD code. The only reason it runs is because it runs with the privs of GOD. It passes hard coded quoted text strings into routines which thump new values into them. An ordinary user gets an access violation extraordinaire. Running under do-anything-you-want you don’t even get a ripple.

Personally, I'm used to such software appliances: MQSeries, mqtt, that COOA object message queuing thing whose name I don't remember, and many others. All software appliances. How they do what they do, the APP doesn't care. We do an OPEN. We do a PUT/Write, or a Read if either ReadyRead or ReadyWrite is set. We close when the app has decided to stop talking.

MQSeries has been on VMS for many years now. Mqtt and COOA were all on various *nix flavors. Why? Because people realize that *nix did it wrong and propagating really bad sh*t isn’t going to move an industry forward. Even creators of Web pages and Web services are starting to use mqtt11.

Why?

Because *nix did it wrong.

Yes, the GDPR, the first of many such laws to come, when fully enforced, will force the Keller MBA mentality of using the cheapest piece of shit off a Walmart shelf to run your company, operated by the lowest wage worker found anywhere on the planet, to change focus, once again, to quality products operated by skilled labor.

In the 1970s and 1980s businesses believed, and rightly so, their software systems provided a competitive advantage in the marketplace. Their custom written systems allowed them to conduct business in ways competitors could not.

Then we had the rise of the worthless cookie-cutter (notably Keller) MBAs. They weren’t going to start off in the mail room and learn what a company actually did, THEY WERE MANAGEMENT! This necessitated every company being the same; otherwise these MBAs would be, justifiably, unemployed. Thus came the rise of off-the-shelf (OTS) and totally untested “Turn the Knob” software, all in an effort to make every company the same so that the output of the MBA mills could find employment. For those not from America: MBAs from Keller are the management equivalent of H1-B workers.

This race to the bottom started in the 1990s and has continued to this day: data centers filled with worthless x86 computers running free/low-cost operating systems with known eight-lane-wide security holes, their owners hoping to avoid prison when the big breach happens.

A firewall couldn’t protect Equifax from the Keller quality MBAs running the company.

The TCP/IP and Socket libraries must be purged from any new OS release. A TCP/IP Software Appliance which provides a stream/file-level interface, removing all port creation and transport-layer security from the application, is the only way forward.

It’s up or out in IT, but many, like WANG Computer of days gone by, are clinging to an obsolete one-trick pony.

So – You Think Commodity Hardware and a Free OS are Business Worthy

We may soon, for the first time, see executives go to prison for this ludicrous decision and the actions they took after.

If you haven’t heard of the Equifax Inc. (EFX) data breach, you haven’t turned on a radio or a television or gone to an actual news site with your browser. Business after business has been trying to skate on its fiduciary responsibilities by relying on “free stuff” which cannot be made secure instead of relying on robust proprietary operating systems and the proprietary hardware they run on.

Equifax is just another in a long line of companies which don’t give a rat’s behind about their customers. There had to be a Keller MBA involved in creating the spreadsheet which “justified” the move to “free stuff.” It’s easy: you just leave off all of the expenses which would negate it.

Most companies these days have been purchasing some form of insurance policy for breaches rather than performing their fiduciary responsibility of using high-quality systems to safeguard their customer data. Most have been replacing skilled American IT workers with H1-B and vacation-visa workers of much lower skill.

A real IT architect knows that you air gap this shit. You set up a sacrificial Web server outside of everything and route data-only messages back through something like WebSphere or your own message mapper. That mapper converts the XML or other free-format message into a fixed-field-width proprietary message, and only that gets back to a real back end. The back end responds with a fixed-field-width proprietary message which the message mapper turns into whatever “open standard” you are supporting via your Web interface.

You never directly connect a Web anything to a database or a real computer.

Insurance policies tend to be backed by various re-insurance schemes and financial instruments, much like those mortgage-backed bonds Wall Street fraudulently sold, creating a global recession. There cannot be enough in the slush fund to cover up to 1/3 of Americans becoming victims of identity theft. Congress cannot allow this company to skate by with only a few months of credit monitoring for each impacted customer. There have to be actual damages and prison time.

We are now standing at the precipice of another financial collapse, at least in the re-insurance market covering companies whose idiot executives used low-cost systems and labor and allowed massive breaches to happen. Insurance pools tend to assume only a small percentage of pool members will have claims. There can be no pool large enough to cover 1/3 of Americans all at once.

Welcome to the new market crash.