No, you can’t just “host it in the cloud” and call that your Disaster Recovery plan. That’s the start of your disaster.
Fly-by-night AGILE shops (okay, fly-by-night and AGILE is saying the same thing twice) always want to believe
Security is the other guy’s problem.
AGILE and Web company shops
They want to hire a group of low-wage developers to badly glue together a bunch of existing services, then sit back and collect the money. They tend to ignore the fact that when their customer data is exposed because of a breach in one of those services, they are the ones liable, not the service provider. Likewise they completely ignore the problem of total data center loss. Every real company has to both file and test a Disaster Recovery Plan, and for most real companies that plan centers on complete data center loss.
More than a decade ago Amazon had a grid power spike that fried a complete row of servers in their data center. There were nowhere near enough replacement servers on hand, nor was there spare capacity elsewhere in the data center. Businesses were offline for weeks while suppliers built more boards, racks, whatever. Some of those businesses went under.
Saying “Oh, it’s Amazon’s problem” means you go out of business.
Reality
I’m sure there have been others that I haven’t ranted about. Why? Because data center disasters happen quite often. Who had a major data center in Texas during the recent massive power outage? I bet you only had a few hours’ worth of fuel for your standby generators, didn’t you? I bet that fuel was all frozen up because you had it outside in the snow and ice. If it was diesel, you probably didn’t bother adding anti-gel or blending in Number 1, so what you had was a big tank of sticky goo, not diesel.
You can read about this data center catastrophe here.
Honestly I don’t know how you have a fire like that in a civilized country with building codes. Most computer rooms I’ve been in had halon systems.
It will be interesting to see just how many DOT-BOMB companies go belly up from this outage. I’m positive they were making backups the cloud provider’s problem too. Stupid people aren’t half lazy, they are completely lazy. When you hack out code on the fly, you don’t really plan.
Real companies not only back up to removable media, be it tape or disk, they send it off-site for exactly this kind of situation. I’m willing to wager an entire case of Diet Dew that this place was keeping its backups in the same building, or in another building on the same site. Why would I think that? Because that’s how mega data centers roll. Real companies write into their hosting contracts that backups must be sent off-site, not just kept in another room or building on the same campus. Why? Because a tornado, fire, flood, massive grid outage, or other natural or terrorist disaster is generally going to take out more than one building.
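Since “off-site” keeps getting interpreted as “the rack next door,” here is a minimal sketch of what I actually mean, assuming you have rsync/ssh access to a machine at a genuinely different site and your nightly backup lands as a single archive. The hostnames, paths, and filenames are hypothetical stand-ins, not anything from a real hosting contract.

```python
#!/usr/bin/env python3
"""Minimal sketch: push the latest backup archive off-site and verify it.

Assumptions (all hypothetical): backups land in /var/backups as *.tar.gz,
and "offsite.example.com" is a machine at a genuinely different site that
you can reach with rsync over ssh.
"""
import hashlib
import subprocess
import sys
from pathlib import Path

BACKUP_DIR = Path("/var/backups")                      # hypothetical local backup drop
OFFSITE = "backup@offsite.example.com:/srv/offsite/"   # hypothetical remote at another site


def sha256(path: Path) -> str:
    """Hash the file in chunks so large archives don't blow out memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def push_offsite(archive: Path) -> None:
    local_digest = sha256(archive)

    # Ship the archive off-site; --checksum makes rsync compare content
    # instead of trusting size and modification time.
    subprocess.run(["rsync", "-av", "--checksum", str(archive), OFFSITE],
                   check=True)

    # Independently re-hash the remote copy over ssh and compare.
    host, remote_dir = OFFSITE.split(":", 1)
    remote = subprocess.run(
        ["ssh", host, "sha256sum", f"{remote_dir}{archive.name}"],
        check=True, capture_output=True, text=True)
    remote_digest = remote.stdout.split()[0]

    if remote_digest != local_digest:
        sys.exit(f"Off-site copy of {archive.name} does not match the local checksum")
    print(f"{archive.name} verified off-site ({local_digest[:12]}...)")


if __name__ == "__main__":
    latest = max(BACKUP_DIR.glob("*.tar.gz"), key=lambda p: p.stat().st_mtime)
    push_offsite(latest)
```

The point of the second hash isn’t paranoia, it’s the same reason you test your Disaster Recovery Plan: an off-site copy you’ve never verified is a copy you don’t actually have.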
The dirty little secret everybody found out with the Amazon outage all those years ago was that you can’t get to your data. That was just a grid power problem that fried some boards. Just how much data are you going to salvage when multiple buildings have fire damage?
Here is yet another post on clouds. The definition of insanity is doing the same thing over and over expecting a different outcome each time.