Forum

Please or Register to create posts and topics.

What should be in the distro?

First let me apologize in advance, but to protect the integrity of the site all forum posts have to be approved. If I didn't do that, identity theft, kiddie porn, drug sales, and Donald Trump would be all over the site. Fancy Bear needs to find someplace else to play.

I should also probably apologize to the individual I responded to this morning. Bringing up the cost of a book to an author is like pouring a container of salt into a fresh gunshot wound.


For those who don't read the other blogs I write on, there is an epidemic of book piracy in India. Not long after I published my $90 OpenVMS book I heard from multiple Indian recruiters that it was one of the most pirated books in India. You could buy it on the street for $5 and then get a "good job." This was, of course, during the peak of off-shoring before back-shoring became real. They didn't even pay to print it because American companies installed copy machines and book binders on their campuses.

We also have to deal with the myth that book piracy is good for authors. Let us not forget for-profit book piracy, especially Google Books.

Some time this year I will be removing my EPUB titles from all stores. I will not turn any future title into an ebook. After I spent around $50K getting all titles converted, because so many non-U.S. buyers complained about shipping, they sold about one EPUB copy each . . . then all sales of all forms stopped. The EPUBs are available on piracy sites all across the Internet and Dark Web. At least with a physical print book they have to take the time to scan each page. Yes, this is a very touchy subject for every author. After Lightning Source's recent round of price increases, it now costs most authors more to print a $35 IT book than they receive wholesale after the mandatory 45% discount.

When it comes to geek books, everyone wants to read them and they want to read them for free. That doesn't help authors eat and live indoors.

====

This first message assumes you have already read Part 1 and Part 2 of the blog post series.

Since everyone was messaging me directly instead of commenting I thought I would get a forum up and running so hopefully we could all discuss it there.

====

How We've Been Muddling Through

Offline Repo

Some of you have suggested the offline repo options. Yes, for some projects this gets done. It doesn't stop the fool though (Part 1). Everything is in the repo unless someone takes a lot of time deleting things from it. You also have to either document 100% of what was removed or make an image of your repo for the external QA people. If someone installs "something they like" or, worse yet, "an editor they are more productive in," you are headed for 510K failure. You haven't written a line of code yet but you've already failed. It will take many months and a few million dollars before you find out.
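For what it's worth, the mechanics of standing up a flat offline repo are the easy part; the discipline around what goes into it is the hard part. A minimal sketch, assuming the dpkg-dev package is installed and using an example repo path:

```shell
#!/bin/sh
# Sketch: index a directory of vetted .deb files as a flat offline apt repo.
# The repo path is an example; dpkg-scanpackages ships in the dpkg-dev package.
REPO="${REPO:-$(mktemp -d)}"

if command -v dpkg-scanpackages >/dev/null 2>&1; then
    # Copy only the QA-approved .debs into $REPO first, then build the index.
    cd "$REPO"
    dpkg-scanpackages --multiversion . /dev/null > Packages
    gzip -kf Packages
    # The matching sources.list line (trusted=yes: local, unsigned repo):
    echo "deb [trusted=yes] file:$REPO ./"
else
    echo "dpkg-dev not installed; commands shown for reference only"
fi
```

Documenting (or imaging) this directory for external QA is then just a matter of archiving `$REPO` alongside its `Packages` index.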

Offline repos only work if 100% of your team are highly skilled professionals. If even one member was "priced right" you fail with this.

Everybody uses the same editor/IDE no matter the bitching. As a consultant, I charge by the hour. If you want to pay me for the 40-120 hours it will take to get "good enough" with your chosen editor/IDE, fine. The meter is running. Employees who whine and snivel about wanting to use their "more productive" IDE are suddenly "allowed to seek opportunities elsewhere." They have to be canned because right now, before you have any portion of your device complete, they have doomed it to failure.

VM Image

The only reliable method I have found is to have one person create a Linux VM with all of the proper tools, libraries, etc. Delete the "viruses" like unattended-upgrades that are constantly searching for updates. Disable the Internet/network for the VM, then export the VM to a backup image. After that you write very detailed instructions on how to import the image. You mandate they verify network access has been disabled in the VM prior to doing any development.
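In VirtualBox terms, cutting the network can be done (and verified) from the host before the image ever ships. A sketch, where "DevVM" is a placeholder VM name:

```shell
#!/bin/sh
# Sketch: strip every emulated NIC from the powered-off dev VM so it cannot
# reach any network. "DevVM" is a placeholder; requires VirtualBox's VBoxManage.
VM="DevVM"

if command -v VBoxManage >/dev/null 2>&1 &&
   VBoxManage showvminfo "$VM" >/dev/null 2>&1; then
    VBoxManage modifyvm "$VM" --nic1 none --nic2 none --nic3 none --nic4 none
    # Verify: every NIC line in the settings dump should now read "disabled".
    VBoxManage showvminfo "$VM" | grep "^NIC"
else
    echo "VirtualBox (or the VM) not present; commands shown for reference only"
fi
```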

Ideally, if the project is going to be very expensive, the same person also creates a script that runs in cron. Each night (or whenever) it verifies the date, timestamp, and size of all critical files, tossing up a big alarm dialog when something isn't correct.
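A minimal sketch of that watchdog, assuming the critical files are listed one per line in a list file. The list location, baseline path, and zenity alarm are all illustrative choices, not a fixed convention:

```shell
#!/bin/sh
# Sketch of the nightly integrity check: record mtime, size, and a checksum
# for every critical file, and scream when anything drifts from the baseline.
LIST="${LIST:-/etc/critical-files.list}"
BASELINE="${BASELINE:-$HOME/.dev-vm-baseline.txt}"

if [ ! -f "$LIST" ]; then
    # Demo fallback so the sketch runs anywhere: watch /bin/sh itself.
    LIST="$(mktemp)"
    echo /bin/sh > "$LIST"
fi

CURRENT="$(mktemp)"
while IFS= read -r f; do
    if [ -f "$f" ]; then
        printf '%s %s %s\n' "$(stat -c '%Y %s' "$f")" \
            "$(sha256sum "$f" | cut -d' ' -f1)" "$f"
    else
        echo "MISSING $f"
    fi
done < "$LIST" > "$CURRENT"

if [ ! -f "$BASELINE" ]; then
    cp "$CURRENT" "$BASELINE"              # first run: record the baseline
elif ! diff -q "$BASELINE" "$CURRENT" >/dev/null; then
    # The big alarm nobody can ignore (falls back to stderr with no display):
    zenity --error --text "Dev VM integrity check FAILED" 2>/dev/null ||
        echo "Dev VM integrity check FAILED" >&2
fi
rm -f "$CURRENT"
# Install via cron, e.g.:  0 2 * * *  /usr/local/bin/check-dev-vm.sh
```

The checksum catches the determined fool who restores a file's timestamp after tampering with it; mtime and size alone do not.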

You have to export the VM because "just copying the files in the directory" will screw you down the road. Today's Oracle VirtualBox can generally import a VM from a 10+ year old version but if you try to "just copy the files" in you are going to be screwed. Oracle keeps changing how it stores stuff. Yes, you can grow old waiting for a 250GB VM to export (need at least that size for Yocto builds) but 10 years from now you will still be able to use it.
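The export side, again in VirtualBox terms. The VM name and output path are placeholders; `--ovf20` selects the most portable appliance format across VirtualBox versions:

```shell
#!/bin/sh
# Sketch: export the finished dev VM to a portable .ova appliance instead of
# copying the raw VirtualBox directory. Name and output path are placeholders.
VM="DevVM"
OUT="${OUT:-$HOME/DevVM-$(date +%Y%m%d).ova}"

if command -v VBoxManage >/dev/null 2>&1 &&
   VBoxManage showvminfo "$VM" >/dev/null 2>&1; then
    VBoxManage export "$VM" --output "$OUT" --ovf20
    # A decade later, on a fresh VirtualBox install:
    #   VBoxManage import "$OUT" --vsys 0 --vmname DevVM-restored
else
    echo "VirtualBox (or the VM) not present; commands shown for reference only"
fi
```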

The determined fool can still screw you here. They turn on network access, install their "more productive" editor/IDE/whatever, then turn it off thinking you won't notice. That's why you need the cron job.

Docker Build Containers

This is a variation on the above two. Most SOM/SOC vendors are pushing you to do this because they "think it is safe." Docker containers can still be impacted by what is installed on/in the host. Never forget that you create local host directories for much of that work. It most definitely is better than the poke-and-hope most device places are doing, but not by a lot.
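To be concrete about both the usual hardening and the hole it does not close, a sketch (printed for reference rather than executed; the image tag and paths are illustrative):

```shell
#!/bin/sh
# Shown for reference rather than executed: typical container hardening,
# and the leak it does not close. Image tag and paths are illustrative.
CMDS=$(cat <<'EOF'
# Pin the build image to an exact version and cut the network:
docker run --rm --network=none -v "$PWD:/src" -w /src debian:12.5 make
# The leak: everything in $PWD on the host rides straight into the container,
# including whatever a developer installed or edited locally on the host.
EOF
)
echo "$CMDS"
```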

====

Fundamental Problems:

  1. The three-decade time frame.
  2. Fools are "priced right," so they will be on the project.
  3. Your instructions are no good.
  4. Management can't be trusted.

Your instructions are no good

The documentation set you have to file, thanks to the Defense Production Act, must allow any business entity to start with nothing and, in a limited amount of time, spin up full development and production. Everybody saw General Motors making ventilators during the COVID-19 pandemic. That is not what GM typically makes but, because of what we have to go through and supply, they were making lots of them in roughly a month.

I have been handed "developer documentation" for projects with click-by-click, step-by-step instructions on how to install Ubuntu 14.04 and configure it for development. Yes, I managed to muddle through it. Some things had already changed by the time I got the documentation. I guarantee you those instructions are worthless today. Ubuntu 14.04 hit End of Life in 2019. Now you have to configure your dev environment based on what I had to do for Ubuntu 10.04 here.

Have I written such instructions? Yes. They always had a title/description of "How the Dev VM Was Initially Created." In the rest of the documentation set you will find "How to Export the Development VM" and "How to Import the Development VM."

Fools are "Priced Right"

I'm not being racist here though it might sound like it. The visa workers the "consulting firms" are shopping around would be limited to stocking shelves at Walmart and waiting tables if they were actually in America. When I was working on the IP Ghoster project the Big Kahuna brought in about 45 of them (I lost count). One made it two weeks. The rest didn't last a week. Some were gone in a day. On paper their resumes said they had 10+ years of experience doing exactly what we were doing and they were "Priced Right." The truth is they had none. Someone who actually had ten years of experience answered all of the interview questions for them.

If all you are making is phone apps or you are Facebook, Google, Apple, or Microsoft, I can understand you hiring people with zero skills who are "priced right" because nothing you make matters! We are talking about medical devices here! A recall like this one happens when you are using Agile and "priced right" developers. How the company hasn't been banned from manufacturing any future medical device after those deaths is beyond me.

"Priced right" labor believes sites like LeetCode teach good programming. No, they do not. That's how you get a body count like the one in that recall. It's also how you end up with people installing whatever they want and thinking it is okay. Any developer can be tired and write bad code; I've done it without being tired. These people not only write bad code, they write unmaintainable code. They are also the people who want to chat endlessly about Design Patterns. They also think template meta-programming is a good idea in a medical device.

Many medical device companies are now mandating desktop computers that stay in the office, off any network that can reach outside, mostly because of people who are "priced right." Other companies are having firmware/drivers added so that if you plug any kind of USB storage device into your laptop it will shut down and refuse to boot until you take it to security.

Why?

If it exists they will install it.

Eventually they install something that causes you to fail the 510K approval process.

Every medical device project must have a Systems Architect and an Application Architect running rough-shod over developers making certain they aren't trying to "solve a problem" today in a manner that causes the entire project to fail tomorrow.

I still have no idea how a catastrophic problem (patient deaths are catastrophic) could actually make it out into a shipping product in a valid medical device development environment.

The three-decade time frame

You will find FDA regulations change over time. Up until two years ago I was getting phone calls and emails from Harman asking me to work on a medical device using OS/2 and Qt 3.x. I never took the gig because they wanted to pay less than one third of market rate. I bring this up because most of you have never heard of OS/2. IBM may have sunset the product, but there is still a company supporting not-small-name customers who are still using it. I used to know what the medical device was, but those brain cells seem to have died.

Three decades is rather short-lived for a medical device. For many decades any medical device that had a "field service screen" tended to have exactly one hard coded "field service" password. A person who had been a repair tech or just went through the training could walk into any medical facility anywhere and get into the field service screen for any device from that manufacturer.

People weren't malicious assholes until Trump got into office. This was known and ignored. Then the FDA came out with a new regulation where each facility must be able to set their own "field service" password.

Anyone who has been through the 510K process knows that a "minor enhancement" approval is way faster/cheaper than a full 510K with clinical trial and everything. Your core development environment still has to remain the same as the original in most cases. You still have to pass the file date/version/size test for core files.

If you can't then you either go through a full 510K process or pull the product from market at the time the regulation goes into effect.

So, you dust off your click-by-click step-by-step instructions for setting up the Ubuntu 10.04 dev environment only it is 2022 now . . .

Management can't be trusted

I have a great relationship with most of the managers I have worked for, but in general, corporate management cannot be trusted. Almost every workaround to this problem people suggest needs to take this scenario into account.

One of the publicly traded corporations I worked with multiple times over two decades kept hiring Keller MBAs. Some/most of the upper mucky-mucks had come from Keller, so the IQ pool wasn't that deep to begin with. One thing I have learned is that Keller MBAs in particular believe they can manage any company without learning what it does. I kid you not. They assume it is one of the token cookie-cutter models covered in school and march on.

This particular type of management chants "cut costs" in meetings, so the room sounds like chickens clucking any time you are in it with them. One of the brilliant cost-cutting measures they came up with was to purge any file that hadn't been accessed in over two years. This brilliant idea came roughly a decade and a half after scanning all paper systems documentation to store digitally was deemed a bright idea to "cut costs." They could stop leasing lots of office and storage space that way.

Many of this company's core business systems had been written in the 1970s to early 1980s. A big chunk of the employees had been with the company so long they were still on the pension system.

Right after the purge to free up disk space they decided to force retire all of those expensive older workers and replace them with "priced right" people. The new hires went to see how things actually worked so they could figure out how to do their jobs . . .

I kid you not.

I have seen the above play out at pretty much every type of corporation, especially those that are publicly traded. You either have to have an external entity that will still be here thirty-plus years from now store all your stuff, or you have to have a system which can be burned to a single DVD. Perhaps a few DVDs. Then the DVDs have to be copied and stored in multiple locations around the country.

I kid you not, I have had management tell me RAID-10 is good enough, we don't need backups.

We aren't talking about disposable razors here, we are talking about complex medical devices. Once purchased by a facility they will be used until they die or are pulled from the market.

Given that telling users to "build in a Docker container" is about as safe and effective as wearing something red to avoid Bubonic plague, we need a real distro solution. This distro must minimize the damage an unskilled/non-professional developer can do to the FDA 510K development process.

Let us present this as the current list:

  • Core OS with terminal and the packages required for building Debian packages, as well as apt-get for installation and the files required for linking NVIDIA drivers.
  • Basic network, keyboard, mouse drivers.
  • The MATE desktop because it is one of the lightest without any Qt components. This will be stripped down, not having the flood of applications most distros stuff into it.
  • Guake or another drop down terminal.
  • These application framework libraries, with full source in directories the default user can build from, so all build dependencies get installed:
      • NanoGui
      • Elements
      • LVGL
      • wxWidgets
      • CopperSpice

  • Only X11 - We don't need Wayland
  • PostgreSQL with full client dev packages
  • SQLite3
  • Both Jed and Nano for editing in terminal with Jed being the default. Jed will also have EDT navigation enabled by default.
  • Docker  -- kind of uncertain on this one
  • ARM cross compilation packages - at least some
  • Flatpak
  • RedDiamond for the default GUI editor

The repo would only have:

  1. "additional" ARM cross compilation packages, assuming we don't install all of them.
  2. NVIDIA drivers that have actually been tested with hardware.

Users would have to install everything else via Flatpak or a local .deb. While a local .deb could trash things, facilities can dramatically reduce that risk via USB-storage-blocking firmware and by removing these systems from any network that can reach the outside world.
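The two sanctioned install paths would look something like this. The remote name, app ID, and package filename are examples only, not real artifacts:

```shell
#!/bin/sh
# Sketch of the two sanctioned install paths. The remote name, app ID, and
# .deb filename are examples, not real artifacts.
APP="org.example.SomeTool"
DEB="sometool_1.0_amd64.deb"

if command -v flatpak >/dev/null 2>&1 &&
   flatpak remotes 2>/dev/null | grep -q company-hub; then
    # From the company-controlled Flatpak remote:
    flatpak install -y company-hub "$APP"
else
    echo "company-hub remote not configured; shown for reference: $APP"
fi
# From a vetted local .deb (the ./ prefix makes apt treat it as a file):
#   sudo apt-get install "./$DEB"
```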

Installation would prompt for local repo locations to override the defaults. It will also prompt for a Flathub repo location so a company can have its own.

 

Here's the current question:

Are any 32-bit ARM SOC/SOM being used for new development?

Don't care about Apple or phone products. Talking medical devices here.

Thoughts people?

I created a forum so you could chime in instead of emailing me all the time. <Grin>