
Design Patterns and Real Computers

I received this question from a consultant I’ve worked with in the past.

Finally, a question about the gigs you get back there. Do you run into software architects that are really big on using design patterns?

No, and for good reason. Any time the contract posting mentions design patterns, I simply do not apply.

You know, I bought that Design Patterns Qt book and never read it. I have never read any of my Qt books, and I have spent many hundreds of dollars on them. I always flip to the one section I need, steal what I need, then get on with my life. The last thing I want to do is get involved with someone chanting in a meditation smock trying to become one with the pattern.

The sweet spot for me with little computer applications is:

    Put a control on the screen. When the user interacts, the control sends a message somewhere (serial, TCP/IP, message queue, doesn’t matter). When some other message comes back in, validate it, process it, and make the application do something.
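A minimal sketch of that loop, assuming Qt with the widgets and network modules; the endpoint, port, and message strings are all hypothetical:

    // Sketch of the control -> message out -> message in -> act loop.
    // "localhost:5000", "START", and "ACK" are invented for illustration.
    #include <QApplication>
    #include <QPushButton>
    #include <QTcpSocket>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        QTcpSocket socket;
        socket.connectToHost("localhost", 5000);    // hypothetical device

        QPushButton button("Start Pump");           // put a control on the screen
        QObject::connect(&button, &QPushButton::clicked, [&socket]() {
            socket.write("START\n");                // user interaction -> message out
        });

        QObject::connect(&socket, &QTcpSocket::readyRead, [&socket, &button]() {
            const QByteArray reply = socket.readLine().trimmed();
            if (reply == "ACK")                     // validate the incoming message...
                button.setText("Pump Running");     // ...then make the app do something
        });

        button.show();
        return app.exec();
    }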

Why? Because that’s how real computers with real operating systems do it.

When you work on real computers with real operating systems, design patterns do not happen. Why? Because everything feeds a database. The DBAs organize the tables based on what you say you will do with the data and something called “third normal form,” as well as a bunch of other DBA-type rules. They dynamically create/drop/whatever indexes and views to improve performance. It doesn’t matter if you are working with COBOL, FORTRAN, BASIC, C++, or RPG. Your program is either accepting input and writing a record to the database, or it is selecting records from the database which match accepted input criteria.
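Those two archetypes boil down to very little code. A hedged sketch using the PostgreSQL libpq client library; the “orders” table, its columns, and the connection string are made up for illustration:

    // The two archetypal database programs: write a record, or select
    // records matching criteria. Table and DSN are hypothetical.
    #include <libpq-fe.h>
    #include <cstdio>

    int main()
    {
        PGconn *conn = PQconnectdb("dbname=production");   // assumed DSN
        if (PQstatus(conn) != CONNECTION_OK) {
            std::fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        // Archetype 1: accept input, write a record.
        PGresult *res = PQexec(conn,
            "INSERT INTO orders (customer_id, amount) VALUES (42, 19.95)");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            std::fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);

        // Archetype 2: select records matching accepted input criteria.
        res = PQexec(conn,
            "SELECT order_id, amount FROM orders WHERE customer_id = 42");
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            for (int i = 0; i < PQntuples(res); ++i)
                std::printf("%s %s\n", PQgetvalue(res, i, 0),
                                       PQgetvalue(res, i, 1));
        PQclear(res);

        PQfinish(conn);
        return 0;
    }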

Design patterns are the realm of wanna-be computers. This is where you have doofuses trying to do everything in RAM with massive data structures. Generally it is a shit design to start with, but the “architects” in that world have zero transactional integrity training and less than zero concept when it comes to a two-phase commit. If they use any “database” they use SQLite, which doesn’t even enforce data integrity. You can store any data type in any column in SQLite; it doesn’t care what the schema says. I know, I’ve used it often in embedded and semi-embedded projects.
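That last claim is easy to demonstrate with the sqlite3 C API. A minimal sketch; the table and values are throwaway illustrations:

    // SQLite type affinity in action: a TEXT value lands in a column
    // declared INTEGER without complaint.
    #include <sqlite3.h>
    #include <cstdio>

    int main()
    {
        sqlite3 *db;
        sqlite3_open(":memory:", &db);

        sqlite3_exec(db, "CREATE TABLE t (n INTEGER);",
                     nullptr, nullptr, nullptr);
        // The schema says INTEGER, but SQLite stores the string anyway.
        sqlite3_exec(db, "INSERT INTO t VALUES ('not a number');",
                     nullptr, nullptr, nullptr);

        sqlite3_stmt *stmt;
        sqlite3_prepare_v2(db, "SELECT n, typeof(n) FROM t;", -1,
                           &stmt, nullptr);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            std::printf("%s (%s)\n",
                        (const char *)sqlite3_column_text(stmt, 0),
                        (const char *)sqlite3_column_text(stmt, 1));
        // prints: not a number (text)

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }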

Such things aren’t allowed in the real world of data processing. Data integrity, transactional integrity and referential integrity are all paramount. When you swipe your credit card to purchase gas, there is no “keep it in RAM and hope the power doesn’t fail” in the design of that system.
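For contrast, the two-phase commit mentioned above looks roughly like this when both resources understand it. A hedged sketch using PostgreSQL’s PREPARE TRANSACTION/COMMIT PREPARED via libpq; the database names, SQL, and global transaction id are hypothetical, and the server must have max_prepared_transactions enabled:

    // Two-phase commit across two databases: both sides must promise
    // they can commit (phase 1) before either actually commits (phase 2).
    #include <libpq-fe.h>

    // Hypothetical helper: run one statement, report success.
    static bool run(PGconn *c, const char *sql)
    {
        PGresult *r = PQexec(c, sql);
        bool ok = (PQresultStatus(r) == PGRES_COMMAND_OK);
        PQclear(r);
        return ok;
    }

    int main()
    {
        PGconn *bank   = PQconnectdb("dbname=bank");    // assumed DSNs
        PGconn *ledger = PQconnectdb("dbname=ledger");

        run(bank,   "BEGIN");
        run(ledger, "BEGIN");
        bool ok = run(bank,
                      "UPDATE accounts SET balance = balance - 50 WHERE id = 1")
               && run(ledger,
                      "INSERT INTO entries (account, amount) VALUES (1, -50)");

        // Phase 1: both sides persist their promise to commit.
        ok = ok && run(bank,   "PREPARE TRANSACTION 'txn-001'")
                && run(ledger, "PREPARE TRANSACTION 'txn-001'");

        // Phase 2: commit everywhere, or roll back everywhere.
        // (Simplified: a real coordinator must plain-ROLLBACK any side
        // whose PREPARE never succeeded.)
        if (ok) {
            run(bank,   "COMMIT PREPARED 'txn-001'");
            run(ledger, "COMMIT PREPARED 'txn-001'");
        } else {
            run(bank,   "ROLLBACK PREPARED 'txn-001'");
            run(ledger, "ROLLBACK PREPARED 'txn-001'");
        }

        PQfinish(bank);
        PQfinish(ledger);
        return ok ? 0 : 1;
    }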

We are about to see a major upheaval in the embedded system world using wanna-be processors. I haven’t tried the Raspberry Pi 3 yet, but the Pi 2 is pretty close to eliminating those shit architectures. If you use a “standard” SD card, the “disk” IO is rather painful. If you put in a Class 10 SD card, there is a night-and-day difference in IO speed and boot time. I suspect the Pi 4 or Pi 5 will raise the price slightly and juice the components for the SD card slot. I also suspect the product will be getting USB-3 instead of regular USB-2.

The Raspberry Pi isn’t just a prototyping board. Many companies are using them in production products. Many more will. With a $35 retail price (much lower when buying bare boards in bulk from the manufacturer) you simply cannot justify a custom board for your application. All of the user interface and data storage functions are being designed into software on the Pi, and custom boards are relegated to custom device interfaces via parallel IO relay control or a serial interface.

Yes, we’ve all studied data structures and dabbled with design patterns. There is also a blurring of definitions. Back in the 80s, queues, linked lists, iterators, and the like were covered in a class called Data Structures. When you studied COBOL (or any 3GL) on a real computer with a real OS, record locking was at the OS (or OS subsystem) level. Yes, COBOL had an APPLY LOCK HOLDING clause which let you manually lock and unlock things, but you generally got into trouble if some other application also needed access to that indexed file at the same time.
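The closest Unix-family analogue to that OS-level record locking is a byte-range lock on the data file itself. A minimal sketch; the file name, record size, and record number are hypothetical:

    // Lock record N of a fixed-length-record file at the OS level,
    // the rough Unix analogue of locking a record in an indexed file.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    constexpr off_t RECORD_SIZE = 128;        // assumed record length

    int main()
    {
        int fd = open("customers.dat", O_RDWR);   // hypothetical file
        if (fd < 0) { std::perror("open"); return 1; }

        struct flock lk {};
        lk.l_type   = F_WRLCK;                // exclusive write lock
        lk.l_whence = SEEK_SET;
        lk.l_start  = 7 * RECORD_SIZE;        // lock record number 7
        lk.l_len    = RECORD_SIZE;            // ...and only that record

        if (fcntl(fd, F_SETLKW, &lk) == -1) { // block until the record is free
            std::perror("fcntl");
            return 1;
        }

        // ... read, update, rewrite the record here ...

        lk.l_type = F_UNLCK;                  // release the record
        fcntl(fd, F_SETLK, &lk);
        close(fd);
        return 0;
    }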

On real computers with real operating systems you design your solution based on the tools your configuration offers. If you needed guaranteed delivery of all messages, you used MQ Series, Tibco, or one of the other message queuing systems which use some form of journalled file to ensure delivery. If you need to ensure all messages are processed once they are delivered, you use ACMS or CICS to ensure any message which fails gets retried N times before being routed to an error queue where your bad message handling server routes it for manual intervention (a sketch of that flow follows the list below). If anything actually lands in that error queue, one of three things has happened:

  1. You have completely consumed a system resource, like disk, or you no longer have enough memory allocated to the account to let another process run.
  2. A front end feeder system, typically a Web page, was written by the lowest bidder, not a qualified developer.
  3. The DBA took the database off-line at a most inopportune moment for some kind of maintenance without telling anyone to stop message processing first.
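Here is the promised sketch of the retry-then-error-queue flow. Everything in it (the queues, the process() routine, the retry limit) is a hypothetical stand-in for what ACMS, CICS, or a message broker actually provides:

    // Sketch of "retry N times, then route to the error queue".
    #include <iostream>
    #include <queue>
    #include <string>

    // Hypothetical business logic; this stub always fails so the
    // message visibly lands on the error queue.
    static bool process(const std::string &) { return false; }

    constexpr int MAX_ATTEMPTS = 3;

    int main()
    {
        std::queue<std::string> inbound;      // stands in for the delivery queue
        std::queue<std::string> errorQueue;   // manual-intervention queue
        inbound.push("TXN|42|19.95");

        while (!inbound.empty()) {
            const std::string msg = inbound.front();
            inbound.pop();

            bool done = false;
            for (int attempt = 1; attempt <= MAX_ATTEMPTS && !done; ++attempt)
                done = process(msg);          // retried N times

            if (!done) {
                errorQueue.push(msg);         // routed for manual intervention
                std::cerr << "routed to error queue: " << msg << '\n';
            }
        }
        return 0;
    }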

On a real computer with a real operating system you don’t write some monolithic program which creates a zillion threads. You put the code which would have been in those threads into little server instances run under ACMS or CICS, and your application simply queues the tasks. You let the system managers set min and max server counts and spread them across any node in the cluster.
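As a loose analogy on a Unix box, queuing the task instead of spawning the thread might look like this. A sketch with POSIX message queues; the queue name and task format are invented, and the workers would be separate server processes whose instance counts the system managers control:

    // Instead of spawning a thread, enqueue a task descriptor and let
    // separate worker processes (the "server instances") drain the queue.
    // Queue name and message format are hypothetical. Link with -lrt.
    #include <mqueue.h>
    #include <fcntl.h>
    #include <cstring>
    #include <cstdio>

    int main()
    {
        // Open (or create) the task queue with system-default attributes.
        mqd_t q = mq_open("/app_tasks", O_WRONLY | O_CREAT, 0644, nullptr);
        if (q == (mqd_t)-1) { std::perror("mq_open"); return 1; }

        // Where a monolith would start a thread, we queue a task instead.
        const char task[] = "PRINT_INVOICE|order=42";
        if (mq_send(q, task, std::strlen(task) + 1, 0) == -1)
            std::perror("mq_send");

        mq_close(q);
        return 0;
    }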

On a real computer with a real operating system you _never_ let a Web application connect directly to a database NO MATTER WHAT. Instead, you put services in place, preferably with fixed proprietary messages. What? No XML? No. XML is for _external_ communication. It is never for _internal_ communication. Why? 99.9999% of all non-password attacks are prevented this way. All of those buffer overrun and SQL injection techniques don’t work on such a system. For those things to work, the Web application must be directly connected to the database. They try to send 80,000 characters to pop past the end of some buffer? Fine. Your proprietary internal message will take the first N bytes, and that’s it. The rest disappears in the ether.
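A minimal sketch of such a fixed proprietary message, with invented field widths; nothing past the declared size is ever copied:

    // A fixed proprietary internal message: whatever the outside world
    // sends, only the declared number of bytes ever crosses over.
    #include <cstring>
    #include <cstdio>

    struct AccountLookup {                 // hypothetical internal message
        char account[16];                  // fixed-width fields, no pointers,
        char region[4];                    // no terminator games
    };

    AccountLookup buildMessage(const char *untrustedInput)
    {
        AccountLookup msg {};              // zero-fill the whole record
        // Take at most sizeof(field) - 1 bytes; the rest disappears.
        std::strncpy(msg.account, untrustedInput, sizeof msg.account - 1);
        std::memcpy(msg.region, "US01", sizeof msg.region);
        return msg;
    }

    int main()
    {
        // 80,000 'A's from a hostile client become 15 bytes internally.
        char attack[80000];
        std::memset(attack, 'A', sizeof attack - 1);
        attack[sizeof attack - 1] = '\0';

        AccountLookup msg = buildMessage(attack);
        std::printf("forwarded %zu-byte message, account='%.15s'\n",
                    sizeof msg, msg.account);
        return 0;
    }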

Yes, I’m sure you’ve all heard proprietary was bad. That was basically started by people trying to sell cheap systems, because proprietary wasn’t cheap. The wanna-be computer platforms are insecure. You cannot harden them without throwing them out and starting over using bulletproof proprietary systems as the template. The wheels came off the cart when systems started passing things by pointer and expecting certain terminating characters. The problems are far too ingrained to ever be rooted out completely. Now that Open Source has become all the rage, far too much of it has been ported to these systems. Platforms that were once completely impenetrable (unless you stole someone’s password) now have eight-lane-wide security holes just like the wanna-be systems, because it is the same software.

Yes, I do quite a bit of work with Linux and Qt on embedded platforms. Yes, I use it as my desktop because it is better than Windows and doesn’t have all of the licensing sh*t to deal with. I would never try to run a company or conduct financial transactions with it, nor would I use Windows for either of those purposes. The one thing which has always amazed me about the reports of these massive identity theft breaches is that the reporters never bother to identify which operating system was penetrated. Tying an operating system to headlines like “Largest Identity Theft to Date” would begin to change some bad habits.

No, not every platform is insecure.

July 2001—OpenVMS deemed unhackable
OpenVMS was declared “unhackable” at DEFCON 9 after an OpenVMS Web server was set up at this self-proclaimed underground convention for “hackers” and enthusiasts. Allegedly, the OpenVMS operators were “told never to return” because trying to hack the OpenVMS operating system was too frustrating.

No, you cannot “bolt on” security to a platform with core architectural flaws and “harden” it. Yes, you can plug a few holes, but it is a bit like trying to make a screen door actually hold water using a can of that spray stuff advertised on TV.

No, proprietary wasn’t bad. It was expensive, and the vendors had bad business practices, trying to lock each other out of markets and forcing the competition out of business. Well, in truth, that part mostly worked. There have to be hundreds of midrange and larger computer manufacturers which aren’t around anymore. Most of you reading this probably don’t know that Singer made a computer. There was also the MAI BasicFour universe. Oh, most of you probably remember Wang, but there were so many more during the 70s. The “being able to exchange data” lesson has _mostly_ been learned. Yes, IBM still has EBCDIC and the rest of the world uses ASCII/Unicode, but the interoperability lesson has been learned.

Very few developers can make the jump from wanna-be computers to enterprise-level systems. For most enterprise-level developers, moving to a wanna-be platform is like buying a new car and having it arrive as a crate filled with parts and assembly instructions.

Roland Hughes started his IT career in the early 1980s. He quickly became a consultant and president of Logikal Solutions, a software consulting firm specializing in OpenVMS application and C++/Qt touchscreen/embedded Linux development. Early in his career he became involved in what is now called cross-platform development. Given the dearth of useful books on the subject, he ventured into the world of professional authorship in 1995, writing the first of the "Zinc It!" book series for John Gordon Burke Publisher, Inc.

A decade later he released a massive (nearly 800 pages) tome, "The Minimum You Need to Know to Be an OpenVMS Application Developer," which tried to encapsulate the essential skills gained over what was nearly a 20-year career at that point. From there "The Minimum You Need to Know" book series was born.

Three years later he wrote his first novel, "Infinite Exposure," which got much notice from people involved in the banking and financial security worlds. Some of the attacks predicted in that book have since come to pass. While it was not originally intended to be a trilogy, it became the first book of "The Earth That Was" trilogy:
Infinite Exposure
Lesedi - The Greatest Lie Ever Told
John Smith - Last Known Survivor of the Microsoft Wars

When he is not consulting, Roland Hughes posts about technology and sometimes politics on his blog. He also has regularly scheduled Sunday posts appearing on the Interesting Authors blog.