HDD manufacturers use GB, which is a metric measurement, because it's better for marketing, while computers use GiB, which is a binary measurement. So people think they're buying 15 GiB but in reality they're getting roughly 13.97 GiB marketed as 15 GB.
That’s not the only issue. Some flash drives have been found to completely misrepresent their sizes. There was something of an epidemic of them a few years ago, so much so that people started testing their drives after purchase (with tools like Fight Flash Fraud, a.k.a. F3). You could fill the drive up, and then it would simply fail, because it didn’t actually have the storage capacity advertised.
Suffice it to say, the data storage industry isn’t without its own brand of shady practices.
The next level is that some flash drives reserve part of the space as a hot spare for when memory cells die. Some keep this separate from the advertised capacity, whereas others may report the total memory on the device even if it’s not available for direct use by the user.
So a double whammy of GB vs GiB and reserve flash memory to keep the drive going as cells die.
And then you have to put a filesystem on it, which has its own metadata – file attributes, folder/file names and so on. If you use NTFS you lose at least 12.5% to the metadata, so now you’re down to about 12.2 GiB. 😛
As an amusing side note, I once came across a joke compression program that could compress any data down to zero bytes. It did this by creating directories filled with zero-sized files whose filenames contained the actual data of the file in question.
If you right-clicked on the folder and asked the OS how big it was, it’d report 0 bytes. But of course all that data still had to be stored somewhere, in the metadata of the filesystem.
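(For the curious: the original program is long gone, but the trick is easy to reconstruct. Here’s a minimal sketch in Python; the chunk size and naming scheme are my own guesses, and base64 is just one way to keep the bytes legal in a filename.)

    import base64
    import os

    CHUNK = 120  # keeps encoded names under the usual 255-byte filename limit

    def compress_to_filenames(src, dstdir):
        # Store a file's bytes entirely in the names of empty files.
        # The data lands in directory entries (filesystem metadata),
        # so naive size tools happily report 0 bytes.
        os.makedirs(dstdir, exist_ok=True)
        with open(src, "rb") as f:
            data = f.read()
        for i in range(0, len(data), CHUNK):
            # urlsafe base64 avoids '/' and NUL, which are illegal in names
            name = f"{i // CHUNK:08d}_" + base64.urlsafe_b64encode(
                data[i:i + CHUNK]).decode()
            open(os.path.join(dstdir, name), "w").close()  # zero-byte file

    def decompress_from_filenames(dstdir):
        # Zero-padded numeric prefixes make lexicographic sort == chunk order
        chunks = []
        for name in sorted(os.listdir(dstdir)):
            chunks.append(base64.urlsafe_b64decode(name.split("_", 1)[1]))
        return b"".join(chunks)

Every “compressed” file shows up as 0 bytes, but the filesystem is quietly paying for all those directory entries.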
That’s part of why I use du on Linux instead of df/ls -l to figure out file/directory/partition usage. du works out the actual size on disk, whereas ls -l only sums apparent file sizes and ignores metadata like the list of files in the directory.
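If you want to see the gap yourself without trusting any one tool, you can total both numbers in a few lines of Python (Linux-specific, since it leans on st_blocks; point it at whatever directory you like):

    import os

    def tree_sizes(path):
        # Apparent size: sum of st_size, roughly what ls -l adds up.
        # Allocated size: st_blocks * 512, roughly what du reports; this
        # also counts the directories themselves, whose entries (the
        # file names) occupy real blocks on disk.
        apparent = allocated = 0
        for root, dirs, files in os.walk(path):
            for name in dirs + files:
                st = os.lstat(os.path.join(root, name))
                apparent += st.st_size
                allocated += st.st_blocks * 512  # st_blocks is 512-byte units
        return apparent, allocated

    a, d = tree_sizes(".")
    print(f"apparent: {a:,} bytes, on disk: {d:,} bytes")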
It’s worse than that: the computer will still see 15GB, but once you fill it beyond 9GB everything will turn to shit and get corrupted. The idea being that this won’t happen until some time after the purchase, making it harder to return.
Someone should get one of these and dd a load of 0xdeadbeefs onto the disk, then dd it all back off and confirm there’s no corruption and it truly is the size it says.
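One wrinkle: a constant fill like 0xdeadbeef can’t catch the usual scam, where the controller wraps high addresses back onto low ones, since every block still reads back the same pattern. Stamping each block with its own offset (the approach F3-style testers take) does catch it. A rough, destructive sketch; /dev/sdX is a hypothetical device node and everything on it gets wiped:

    import struct

    BLOCK = 4096
    DEV = "/dev/sdX"  # hypothetical; run as root, destroys all contents

    def stamp(dev):
        # Write each block's own byte offset into it until the device fills.
        written = 0
        with open(dev, "wb", buffering=0) as f:
            try:
                while True:
                    f.write(struct.pack("<Q", written) * (BLOCK // 8))
                    written += BLOCK
            except OSError:  # ENOSPC once we run off the end
                pass
        return written

    def verify(dev, total):
        # Read back; a wrapped/fake device returns the wrong offsets.
        # A real tester would use O_DIRECT or drop the page cache first,
        # so reads come from the flash itself rather than RAM.
        # (Sketch ignores a possible partial block at the very end.)
        with open(dev, "rb", buffering=0) as f:
            for offset in range(0, total, BLOCK):
                if f.read(BLOCK)[:8] != struct.pack("<Q", offset):
                    return offset  # first lying block
        return None

On a drive that claims 15GB but only holds 9, verify() will report a mismatch, often right near the start, because the late writes wrapped around and clobbered the early blocks.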
I’ve seen the firmware on shitty SD cards and drives lie about their storage capacity.
“Here is a 15GB card”; btw, it only has 9GB.
The issue with this is the difference between GB (1,000,000,000 bytes) and GiB (1,073,741,824 bytes) https://massive.io/file-transfer/gb-vs-gib-whats-the-difference/
Just as a side note for any reader who doesn’t already know it, the computer ones are 2 to the power of a multiple of 10.
So 1 KiB is 2^10 (which is 1,024) bytes, 1 MiB is 2^20 (1,048,576) bytes and so on.
So there is actually some logic behind the weird-looking numbers.
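A quick sanity check of the arithmetic in this thread (plain Python, assuming nothing beyond the 12.5% NTFS figure quoted above):

    GB = 10**9    # marketing gigabyte
    GiB = 2**30   # binary gibibyte: 1,073,741,824 bytes

    advertised = 15 * GB
    print(advertised / GiB)          # ~13.97 GiB before anything else
    print(advertised * 0.875 / GiB)  # minus ~12.5% metadata: ~12.2 GiB

So unit games plus filesystem overhead get you from 15 down to roughly 12, which is exactly why the drop to 9 needs another explanation.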
True, and the filesystem also takes a bit off on top. That, however, doesn’t explain 15 vs 9 GB.