You met me at a very strange time in my life.
In Lithuania, healthcare e-services went down after the basement where the servers were kept got flooded in a rainstorm. They went down for a couple of weeks.
dies after fall
Well, if that ain’t a whitewashed headline.
So I have lived in South Korea for 6 years now. The fact that this fire has had such a major impact is quite typical of Korean bureaucracy and tech administration. Very few backups, infrastructure held together with scotch tape and bubblegum, overworked devs and maintainers. It’s a bit sad, especially for a country that exports so many tech products.
Their explanation for not backing up the 858 TB of data was, quote, "due to its large capacity". They stored eight years of data without backups. Even for the systems that did have backups, it sounds like there's no redundancy – nobody can work because the single building where all the servers are located is currently out of order.
Sounds like the acute symptoms of chronic penny-pinching when it comes to IT infrastructure. I hope they take some good lessons from it at least. Just a shame that it’s such a devastating way to learn.
“we can’t ensure the data is safe because there’s too much of it”
…sounds like an especially big reason to figure something out, huh? Not to mention, 858 TB isn't even that much for a whole-ass government. For a consumer it might be $10 per TB for a new drive, and likely less at government procurement scale, which makes it just a bit under 10,000 USD for a full backup. That's it. Even if you budget in having to replace all the drives once a year, 10,000 USD/yr is a bargain.
New drives are more like €20 per TB. Factor in redundancy with something like RAID 5 and boom, off-site storage costs you a government-toppling €20,000 in hard drives.
And then you buy the storage from Dell or some other big player, and the disk array with controllers, warranties and the kitchen sink costs €100k. Which is still almost a rounding error at that scale.
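If anyone wants to play with the numbers, here's a quick back-of-the-envelope sketch. The prices, the RAID 5 layout and the 5x "enterprise" markup are just the assumptions floating around this thread, not real quotes:

```python
# Rough cost of backing up 858 TB, using the prices thrown around above.
# All figures are assumptions for illustration, not actual quotes.

DATA_TB = 858

def drive_cost(price_per_tb: float, overhead: float = 1.0) -> float:
    """Cost of enough drives for DATA_TB, with a capacity overhead
    factor (e.g. RAID 5 parity, room for growth)."""
    return DATA_TB * overhead * price_per_tb

# Consumer-ish pricing: ~$10/TB
print(f"Bare consumer drives: ${drive_cost(10):,.0f}")

# ~€20/TB plus RAID 5 parity overhead for 12-disk groups
# (11 data + 1 parity -> 12/11 capacity overhead)
raid5 = drive_cost(20, overhead=12 / 11)
print(f"RAID 5 at €20/TB:     €{raid5:,.0f}")

# Big-vendor array with controllers, support and the kitchen sink:
# guessing a ~5x markup over bare drives, which lands near the
# ~€100k figure mentioned above.
print(f"Enterprise array-ish: €{raid5 * 5:,.0f}")
```

However you slice it, it comes out to pocket change for a national government.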
Some moron deleted a 75 TB prod database the other day, and sure, that was catastrophic (for him, mostly), but it was backed up. We are a mid-size company, maybe a few hundred people across the country. I can't imagine the government of freaking Korea, land of fiber years before everyone else, running so short on storage that they can't do backups.
This shit is gonna go into IT textbooks, like the OVH data center fire from 2021.
I collect stories like this for when I need to make a case for purchasing new gear or services.
I know this is programmer_humour, but can we express some condolences for the manager who ~~jumped~~ fell from the building?
Ironically, the Govt. official is a resource they have enough backups for.
Can this now go to darkhumor@lemmy.world ?
Yeah, I agree. I was chuckling till I saw the last article and audibly said "oooooh". A real sobering moment.
Oh, that's even sadder. I thought they had gotten the same treatment Russians do.
This was probably some honor thing. Or avoidance of responsibility. Or maybe taking one for the team as a scapegoat. I don't know. Nevertheless, it is sad.
Edit: If you or someone you know is feeling emotionally distressed or struggling with thoughts of suicide, you can find international contacts at https://befrienders.org/.
Literally who ever said “backups are overrated”?
My previous company had, for more than 10 years, kept all the data customers shared with us. Structured and standardized, so it should have been easy peasy.
Somehow they were "appending it wrong" in some way, and the data was useless. I think they were trying to reduce the size by aggregating a bit, but they did it in a way that rendered the data unusable.
Of course the CEO wanted to train models with it anyway…
10 years and no one bothered to pull some information at random? I mean, companies generally have a schedule of assessments to verify their records. Even if it's as simple as a checksum.
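A random spot check would already catch the worst of it. A minimal sketch, assuming checksums were recorded as .sha256 sidecar files at ingest time (the archive path and sample size are made up):

```python
import hashlib
import random
from pathlib import Path

# Minimal "pull some records at random and verify checksums" spot check.
# The archive path, sample size, and .sha256 sidecar convention are all
# assumptions for illustration.

ARCHIVE_DIR = Path("/srv/archive/customer-data")  # hypothetical location
SAMPLE_SIZE = 20

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

files = [p for p in ARCHIVE_DIR.rglob("*")
         if p.is_file() and p.suffix != ".sha256"]
for path in random.sample(files, min(SAMPLE_SIZE, len(files))):
    sidecar = Path(str(path) + ".sha256")  # checksum recorded at ingest
    if not sidecar.exists():
        print(f"NO RECORDED CHECKSUM: {path}")
    elif sidecar.read_text().split()[0] != sha256_of(path):
        print(f"MISMATCH: {path}")
```

Stick that in a cron job and you at least find out the data is garbage in week one, not year ten.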
The thing is, the data was expected to be slightly aggregated, not a 1:1 copy. The problem only shows up when you try to use the data for analysis and realize it doesn't make any sense.
Everything is fine. Why would you need a backup?
The people who laid off 85% of the IT dept.
Penny counters who don’t like paying for storage
The person in charge of allotting budget. “You want how many thousands for backup solutions? Here, take this flashdrive I picked up in the parking lot and use it for backups, that should be plenty enough. I mean, how many bytes can our data be? Two, three maybe?”
I get it now.
If I had a nickel for every time someone didn’t backup their datacenter, I’d have two nickels.
Which isn’t a lot, but it’s weird that it happened at least twice.
Last time we lost disks at work, there were full backups.
They were just on the same disks as the data. And because everything is abstracted twice over into virtual disks on virtual machines, plus containers and volumes, the people responsible for the backups didn't even know it.
But wouldn't you, like… check? That the backups are on their own drive? The whole 3-2-1 rule (three copies, two different media, one off-site) kinda makes you want to check this, no?
Or was it that they knew where the backup drives were, but didn't know those drives were being virtualized away and were actually in production use?
I dunno what possibilities they actually had. But knowing the place, I can fully believe both that they weren’t allowed to check and that they never bothered.
The most likely scenario in my head is that they sent a request to the provisioning team asking for the volume to be on a different disk, and that detail never made it to the technician actually doing the work (who sits in the next chair, but the requests have to come through the system).
(And the long-term backups were fine. We lost 3 days of data.)