How Far We’ve Come – Pt. 15

I need to continue yesterday’s discussion about good architectural design a bit. Particularly the part about not shoving everything into RAM. I see two different kinds of C++ developers on Qt projects.

  • Those who want to be one with the OOP. Every piece of data, including an integer, should be an object derived from the most holy object destined to end world hunger and bring world peace. To them, you should be pushing the envelope, going beyond the C++11 and C++14 standards into the brave new world of “proposed” standards changes, for each and every project. If there is any change to the requirements or the language specification, the complete OOP project will have to be rewritten from scratch. Just the way it is.
  • Those who have worked other places, on other platforms, with other languages and tools. While they love Qt, they treat C++, for the most part, as a better C. They don’t use the shiny new features unless the situation _really_ requires it. You won’t find them writing lambdas which return values as targets for signals. In fact, you won’t find them adding lambdas at all. They know full well this project/application will _never_ be rewritten. Some kid fresh out of college, with zero real-world experience, will be tasked with maintaining it for years. A few will go from new hire to retirement party taking care of this one thing. (Don’t believe that last part? Take a look at just how long the core payroll system has been in place at long-time Fortune 500 companies such as Sears, GM, or AT&T. If not payroll, then the general ledger or some other core accounting system.)

Most of those coming from a “one with the OOP” point of view never learned to use a relational database. They were only taught object oriented programming. I was shocked on a recent project when one from this world agreed we should remove all of the lambdas from the code base. He actually built up a really compelling case (besides the fact they accounted for roughly 10% of all system crashes) which I simply named “There Be Dragons.” I don’t remember all of the details or his drawing, primarily because he didn’t need to convince me, but it was a good one. When you have countless threads and timers running, especially with lambdas which return a value, you can attempt to return a result to an object which has already been deleted. There were several other scenarios which can also cause crashes. Ah, yes, here is one.

It is possible to be a bit lazy and capture all local variables. To capture them all by reference, use [&]. To capture them all by copy, use [=]. We do not recommend doing so, however, because it makes it far too easy to reference variables whose life-cycle is shorter than the life-cycle of your lambda, leading to odd crashes. Even capturing by copy can cause such crashes if what you copy is a pointer. Explicitly listing the variables you depend on makes it easier to avoid this kind of trap. If you want to learn more about it, have a look at Item 31 of “Effective Modern C++” (“Avoid default capture modes”).

Does this mean I’m completely against the lambda thing? Nope. There are times when you have stand-alone functions needed by many objects which simply make more sense as lambdas. You can’t always encapsulate them in an object, especially when they come in something like a third-party C library. Yes, you _could_ create a wrapper object, then create an instance of it and connect it to a signal, but when you look at your Doxygen-generated application diagram you are going to see a massive puddle of spaghetti balanced on one object.

Those of us who came from the era when 2400-foot reels of magnetic tape had to be mounted by a computer operator and used in conjunction with a few very expensive disk drives, then migrated to cheaper disk drives and OS-provided indexed file systems, and finally to relational databases, have a different view of the world.

The database is the center of the universe.

You store everything there because you aren’t the only one who is going to need it. Eventually that integer value someone wanted to turn into an object will get pulled into a data warehouse where years from now reports nobody thought of today will require it.

[Image: Fitbit and Medtronic]

Oh, I’m creating an embedded system for my Pi/Android/custom device, so I don’t need to worry about that. Guess again! You are creating a feeder system. View it as a sensor or data collector, because that is what it is. At some point it will unload its data either to a bigger database on the local network or in the cloud. Even a Fitbit syncs its data somewhere.

Were someone productionizing our little lotto tracker system, the draw_stats table would be a permanent table. There would be another tracking table keeping a timestamp for the last time drawing data had been added and the last time draw_stats had been generated. At the start of any report which needed it, these timestamps would be compared, and the draw_stats table would only be regenerated when it was stale. Once it had been regenerated, a database trigger would also fire. That trigger would either schedule or immediately launch a batch job which pushed the new data out to a data warehouse for back-in-time reporting.

Sorry, forgot, many reading this will be Millennials. You have no idea what a data warehouse is. The closest Web-based example I can give you is the Internet Archive Wayback Machine. You can pick a Web site they are archiving and go back in time for however many years it has been archived to see what it looked like and the content it contained. Keep this in mind before you join in an embarrassing flame/slur war with someone anywhere on-line. If the place is archived now or in the future on the Wayback Machine, your bad behavior will be there for all to see until the archive project loses all funding.

The whole data warehouse world fumbled around for quite a while until both businesses and the software grasped the wayback concept. For retail operations it lets you go back many years to look for “seasonal trends” so you can more accurately adjust your product mix. For medical records and personal health history, it allows people to look back and see that 10 years ago this month they rode this bike/whatever with the exact same settings and their heart rate was X; today it is Y, so quit smoking and lay off the fried food.

We are quickly approaching a time when FDA-regulated Fitbit-like products will not only be everywhere but be mandated by most insurance companies, especially if you have some long-term/chronic health issue. They will periodically upload the collected information via your Linux-based home computer to your various doctor offices and, most likely, your insurance company. Privacy won’t enter into it. You’ll be faced with paying north of $4K/month for health insurance without one and under $400 with one. For those who doubt such a day is less than 5 years out, take a look at Progressive Snapshot and all of the similar offerings other companies are now coming out with. Conceptually, it is doing the exact same thing, only it is recording your driving habits. If you have an integrated nav system with Garmin-level speed limit information, it could also know you were doing 45 in a 20 last Thursday.

Originally such devices were just for high-end cars capable of 150 MPH or more. Given all of the tech integration on today’s vehicles, they have access to way more data if they choose to use it.