
Some Thoughts on MVVM – Part 1

There is currently a lot of chatter in the IT world involving MVVM (Model-View-ViewModel). While the pattern has merits, it also has drawbacks. One of the biggest drawbacks I’ve noticed is in the tutorials and documentation found on-line. Take a quick look at this article. Below is a quote from it.

The key to remember with the model is that it holds the information, but not behaviors or services that manipulate the information. It is not responsible for formatting text to look pretty on the screen, or fetching a list of items from a remote server (in fact, in that list, each item would most likely be a model of its own). Business logic is typically kept separate from the model, and encapsulated in other classes that act on the model. This is not always true: for example, some models may contain validation.

 

I made the last two sentences bold because they are what triggered this post. Much of what you find about MVVM comes from a PC or tiny-system background. It views all data as an object and assumes business logic should be placed in other classes. The writers of such material have never worked in an enterprise-level environment.

Enterprise-level applications all started out using some form of indexed file system. RMS, ISAM, and VSAM were the predominant file types of the day from the late 1980s to the mid-1990s. These were all we had. All business logic had to be coded within the application. This pushed systems analysts to force developers to create modular programs. Many of you have heard the phrases before:

  • source/copy libraries
  • external subroutines
  • site specific object libraries
  • company specific reference manuals

External source libraries started rather early on. COBOL provided the Copy Lib concept, where you could pull in source code from external directories and text libraries. BASIC and other languages followed suit with various forms of %INCLUDE. While you _could_ include source anywhere in these modules, with BASIC, FORTRAN, and a few other languages where variable declarations weren’t required you had to worry about variable collisions. If you used a variable, say X, to store a value you needed and the included source used X as a local scratch variable, interesting bugs occurred.
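To make the collision concrete, here is a rough modern C++ analogue; the file and routine names are invented purely for illustration. Because the helper is pulled in textually, it shares the caller’s global scope, and a helper that treats a global named x as its own scratch space silently overwrites the caller’s value.

// scratch_helper.h -- stands in for an included "copy lib"; it assumes a
// global named x exists and uses it as its own scratch space.
inline int round_to_cents(double amount)
{
    extern int x;                                  // the shared global
    x = static_cast<int>(amount * 100.0 + 0.5);    // clobbers the caller's value
    return x;
}

// payroll.cpp
#include <iostream>
#include "scratch_helper.h"

int x = 0;                                         // a value the caller needs later

int main()
{
    x = 1234;                                      // caller stores something important
    int cents = round_to_cents(9.99);              // helper silently overwrites x
    std::cout << cents << ' ' << x << '\n';        // prints "999 999", not "999 1234"
    return 0;
}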

COBOL was a bit safer because each Copy Lib containing source paragraphs had a corresponding Copy Lib for the WORKING-STORAGE section. The compiler would alert you to a duplicate declaration, usually. You could still have the same variable name occurring at different levels in both the I-O SECTION and WORKING-STORAGE. If you tried to use X without qualifying it, as in X IN PAYROLL-RECORD or X IN MY-SCRATCH-VARIABLES, the compiler would usually flag the ambiguity so you knew in advance you had a problem.

External subroutines came about rather quickly. When programming in a version of BASIC which required each line to have a unique line number, with a language-imposed line number range of 1-32767, you simply couldn’t fit all of the required logic into a single source file. Eventually BASIC standards changed to the point where you needed only the first and last line numbers, and finally the line number requirement was pretty much removed. Even so, we had compiler limitations. More than once I encountered a source file which exceeded the compiler’s input capacity. For languages which did not require variable declarations, external subroutines also got around the collision problems. Each individually compiled routine had its own variable scope. Any values it needed from the caller were passed in.
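Staying with the same invented example, a separately compiled routine keeps its scratch variable local and receives the value it needs as a parameter, so the caller’s variables are never touched:

// round_to_cents.cpp -- the "external subroutine": compiled on its own,
// its scratch variable is local, and its input arrives as a parameter.
int round_to_cents(double amount)
{
    int scratch = static_cast<int>(amount * 100.0 + 0.5);   // local scope only
    return scratch;
}

// payroll.cpp
#include <iostream>

int round_to_cents(double amount);                 // declaration of the external routine

int main()
{
    int x = 1234;                                  // safe: the routine cannot see it
    std::cout << round_to_cents(9.99) << ' ' << x << '\n';   // prints "999 1234"
    return 0;
}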

After a few years, maintaining all of these external subroutines and functions within the build procedures for applications became a real hassle. We started creating site-specific object libraries so that if some compilation parameter or the like needed to change because of a change to a specific routine, we didn’t have to go update the build procedures for hundreds or thousands of other applications.

It shouldn’t take a big leap of faith to believe all of that site/company-specific business logic became really hard to keep track of. It should be an even smaller leap of faith to realize many site-specific libraries had multiple routines which did the exact same thing, or something quite close to it. This led to semi-official librarian positions: one or more people in charge of maintaining the site-specific libraries and updating a company-specific reference manual. Various code review steps were put in place to ensure programmers weren’t spending their time re-inventing the wheel.

All of this evolution is why companies became single-language shops. COBOL, BASIC, FORTRAN, or whatever the first major application was written in became the language of the shop. Once you started creating your business logic in a language and incorporating it into various libraries, you wanted to keep it all in one place. OpenVMS had the OpenVMS Calling Standard. If you adhered to the calling standard when declaring parameters, you could, quite literally, have your business logic library written in many programming languages and use it from many others. Other platforms strictly enforced language-specific calling standards, making it difficult, if not impossible, to call a COBOL module from FORTRAN or vice versa.

Shops started needing more languages. The Internet came along, after all. Now we needed Web pages and Web services to also incorporate 20+ years of business logic nobody remembered. The first major attempt at solving this problem was Service-Oriented Architecture. You may have heard the term “Data Silos” bandied about. By and large this meant each division of each company had its own set of data and applications containing the business logic. Sadly, data entry screens contained a large portion of the business logic. Applications which didn’t use forms managers like DECForms and CICS tended to code portions of the business logic directly in the form rather than making each field validation an external subroutine in the company library.

Even if programmers tried, some of the validation was provided by the forms packages themselves. Few thought about writing external date/numeric/integer/limited-character-string validation routines because they were so easily enabled in the forms managers. I’ve been doing a lot of Qt application development for embedded systems over the past few years, and most of the applications I see or work on have these same validations provided by the Qt library. In defense of these systems, they have almost all been one-off embedded systems with touch screens. The data they collect gets sent off to back-end services which also have to provide their own validation because they receive data from many different sources.
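As a small, contrived example of what I mean, this is roughly how one of those library-supplied validations looks in Qt; the field name and range are made up for illustration:

#include <QApplication>
#include <QIntValidator>
#include <QLineEdit>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // The validation lives entirely in the Qt library: the line edit
    // simply refuses input that is not an integer between 0 and 150.
    QLineEdit ageField;
    ageField.setValidator(new QIntValidator(0, 150, &ageField));
    ageField.setPlaceholderText("Age (0-150)");
    ageField.show();

    return app.exec();
}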

The correct place to validate data is the last step/process which actually writes it to the database. Once it gets into the database, it is too late.
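Here is a bare-bones sketch of what I mean, with the record layout and storage call invented for illustration: every screen, service, or device funnels through one write routine, and that routine owns the final validation.

#include <iostream>
#include <optional>
#include <string>

// Invented record layout, for illustration only.
struct PayrollRecord {
    std::string employeeId;
    long long   amountCents = 0;
};

// Stand-in for the real database write.
static bool insertPayrollRow(const PayrollRecord &) { return true; }

// The last step before the database owns the validation, regardless of
// which front end supplied the data.
std::optional<std::string> storePayroll(const PayrollRecord &rec)
{
    if (rec.employeeId.empty())
        return "missing employee id";
    if (rec.amountCents < 0)
        return "negative amount";
    if (!insertPayrollRow(rec))
        return "database write failed";
    return std::nullopt;                           // stored successfully
}

int main()
{
    if (auto err = storePayroll({"", 1500}))
        std::cout << "rejected: " << *err << '\n'; // rejected: missing employee id
    return 0;
}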

It should not come as a surprise to anyone that first Java developers, then developers in many other Web languages, ended up recreating various forms of the same business logic which had been incorporated into the original “green screens.” For Java, at least initially, this meant they had to roll their own validators. Later, various libraries such as Swing provided some rudimentary validation routines.

The major fly in the ointment was, and still is, date validation. You cannot validate a date string until you know the format. You cannot convert a date string to some internal binary representation if it is invalid. True, you can write a routine which accepts both the string and a format string, such as:

"01-01-2017", "MM-DD-YYYY"

But you cannot write a routine which validates the date string by itself, without the format, because 01-01 could be MM-DD or DD-MM.
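To make the ambiguity concrete, here is how it plays out with Qt’s QDate (Qt spells the format “MM-dd-yyyy”); the same string is accepted or rejected depending entirely on the format the caller supplies:

#include <QDate>
#include <iostream>

int main()
{
    // "01-01-2017" parses either way -- the string alone cannot tell you which.
    std::cout << QDate::fromString("01-01-2017", "MM-dd-yyyy").isValid() << '\n';  // 1
    std::cout << QDate::fromString("01-01-2017", "dd-MM-yyyy").isValid() << '\n';  // 1

    // "13-01-2017" only makes sense day-first; month-first is rejected.
    std::cout << QDate::fromString("13-01-2017", "MM-dd-yyyy").isValid() << '\n';  // 0
    std::cout << QDate::fromString("13-01-2017", "dd-MM-yyyy").isValid() << '\n';  // 1
    return 0;
}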

 

Roland Hughes started his IT career in the early 1980s. He quickly became a consultant and president of Logikal Solutions, a software consulting firm specializing in OpenVMS application development and C++/Qt touchscreen/embedded Linux development. Early in his career he became involved in what is now called cross-platform development. Given the dearth of useful books on the subject, he ventured into the world of professional authorship in 1995, writing the first of the "Zinc It!" book series for John Gordon Burke Publisher, Inc.

A decade later he released a massive (nearly 800 pages) tome, "The Minimum You Need to Know to Be an OpenVMS Application Developer," which tried to encapsulate the essential skills gained over what was, at that point, nearly a 20-year career. From there "The Minimum You Need to Know" book series was born.

Three years later he wrote his first novel "Infinite Exposure" which got much notice from people involved in the banking and financial security worlds. Some of the attacks predicted in that book have since come to pass. While it was not originally intended to be a trilogy, it became the first book of "The Earth That Was" trilogy:
Infinite Exposure
Lesedi - The Greatest Lie Ever Told
John Smith - Last Known Survivor of the Microsoft Wars

When he is not consulting, Roland Hughes posts about technology and sometimes politics on his blog. He also has regularly scheduled Sunday posts appearing on the Interesting Authors blog.