
Featured image: cute cloned dogs

A CHALLENGER APPEARS: “fclones”…fastest duplicate scanner ever? It’s complicated.

While perusing my link referrals on GitHub, I noticed this thread where my duplicate scanner jdupes was mentioned. I then noticed the comment below it:

There is also a much faster modern alternative to fdupes and jdupes: fclones. It searches for files in parallel and uses a much faster hash function than md5.

My response comment pretty much says it all, so I’m making that the entire remainder of this post.

I noticed that fclones does not do the byte-for-byte safety check that jdupes (and fdupes) does. It also relies exclusively on a non-cryptographic hash for comparisons. It is unsafe to rely on a non-cryptographic hash as a substitute for the file data, and comparisons between duplicate finders running in full-file comparison mode vs. running in hash-and-compare mode are not appropriate. The benchmark on the fclones page ran jdupes 1.14 without the -Q option that disables the final byte-for-byte confirmation, so jdupes was doing a lot of extra work to avoid potential data loss that fclones skipped entirely.
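
To make the difference concrete, here's a rough shell sketch (file names are placeholders, and it assumes the xxhsum tool from the xxHash package is installed): a hash-only scanner stops at the first test, while a safe scanner only declares a match after the second.

    # Hash-only mode: trust the fast hash and call it a duplicate.
    # Safe mode: treat a matching hash as a candidate, then confirm every byte.
    a=/data/photo1.jpg
    b=/backup/photo1.jpg

    hash_a=$(xxhsum "$a" | awk '{print $1}')
    hash_b=$(xxhsum "$b" | awk '{print $1}')

    if [ "$hash_a" = "$hash_b" ]; then
        # a hash-only scanner would stop here and declare a duplicate
        if cmp --silent "$a" "$b"; then
            echo "byte-for-byte confirmed duplicate"
        else
            echo "hash collision: same hash, different data"
        fi
    fi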

jdupes already uses a faster hash function than MD5 (xxHash64 as of this writing, previously jodyhash), and it is fairly trivial to switch to even faster hash functions if desired…but the fact is that once you switch to any “fast hash” function instead of a cryptographic one, the hash function is rarely the bottleneck, especially compared to the I/O bottleneck represented by most consumer-grade hard drives and low-end SSDs. If everything to be checked is already in the buffer cache then hashing might be the bottleneck, but the vast majority of duplicate scans are performed on data that is not cached.
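
If you want to see this on your own machine, a crude test looks something like the following (it assumes the xxhsum tool is installed and that /data/bigfile is some large file of yours): the cold-cache run is bounded by the disk, the warm-cache run is bounded by the hash itself.

    # First run: data must come off the disk, so I/O dominates.
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    time xxhsum /data/bigfile

    # Second run: data is in the page cache, so the hash function dominates.
    time xxhsum /data/bigfile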

Searching for files in parallel is only an advantage if the disk I/O is not a bottleneck, and you’ll notice that the fclones author performed the benchmarks on a (presumably very fast, since it’s paired with a relatively recent Xeon) 512GB NVMe SSD with an extremely fast multi-core, multi-threaded processor. There is a very small access time penalty for random read I/O on a fast NVMe SSD, but there is an extremely large access time penalty for random read I/O on a traditional rotating hard drive or a RAID array composed of several hard drives. Multiple threads firing off reads against the same array at the same time will slow even most RAID arrays to a single-digit MB/sec death crawl. I understand that many people will be working with SSDs and that some duplicate scanners are a better choice for SSDs, but the majority of computer systems still have spinning rust rather than flash-based disks.
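
You can see the effect for yourself with nothing fancier than two large files on a single spinning disk (paths are placeholders; drop the page cache first so the reads actually hit the platters). Two readers competing for the same heads finish far more slowly than the sum of two sequential runs would suggest.

    # One sequential reader: the drive streams at full speed.
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    time cat /mnt/rust/big1 > /dev/null

    # Two simultaneous readers: the heads seek back and forth between the
    # two files and throughput collapses.
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    time ( cat /mnt/rust/big1 > /dev/null & cat /mnt/rust/big2 > /dev/null & wait )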

It is strongly advisable to (A) run your own benchmarks on your specific workload and hardware, and (B) understand how to use the program within your own personal acceptable level of risk. Both of these are different for every different person’s needs.

UPDATE: I found another instance of the fclones author claiming jdupes being single-threaded makes it slow; to quote directly:

Unfortunately these older programs are single-threaded, and on modern hardware (particularly on SSDs) they are order of magnitude slower than they could be. If you have many files, a better option is to use fclones (disclaimer: I’m the author), which uses multiple threads to process many files in parallel and offers additional filtering options to restrict the search.

The points I’ve made above still stand. Unless you’re running the author’s enterprise-grade high-end hardware, your disk random access latency is your major limiting factor. I’d love to see what fclones does on something like a 24TB disk array. I’d wager, exactly as stated above, that 8 or 32 simultaneous I/O threads would bring the whole process to a death crawl. Perhaps I should bite the bullet and run the tests myself.

UPDATE 2: I was right. Benchmark article and analysis forthcoming.

Featured image licensed under CC-BY from Steve Jurvetson: https://www.flickr.com/photos/jurvetson/3327872958

ZFS won’t save you: fancy filesystem fanatics need to get a clue about bit rot (and RAID-5)

UPDATE 3 (2020-01-01): I wrote this to someone on Reddit in a discussion about the ZFS/XFS/RAID-5 issue, and it does a good job of explaining why this article exists and why it’s presented in an argumentative tone. Please read it before you read the article below. Thanks, and have a wonderful 2020!

There really is no stopping zealots. Anyone who reads my article all the way through and takes the text at face value (rather than taking liberties with interpretation, as the abundant comments underneath it demonstrate) can see that I’m not actually dumping on ZFS nor saying that RAID-5 is the One True Way(tm). It really boils down to: ZFS is over-hyped, people who recommend it tend to omit the info that makes its protection capabilities practically useful, XFS is better for several use cases, RAID-5 is a good choice for a lot of lower-end people who don’t need fast rebuilds but is also not for everyone.

I strongly advocate for people using what fits their specific needs, and two years ago, there was a strong ZFS fanatical element on r/DataHoarder that was aggressively pushing ZFS as a data integrity panacea that all people should use, but leaving out critical things like RAID-Z being required for automatic repair capabilities. At the same time, I had read so many “DON’T USE RAID-5, IT SHOULD BE BANNED!” articles that I was tired of both of these camps.

The fact is that we have no useful figures on the prevalence of bit rot and there are a ton of built-in hardware safeguards against it already in place that so many fellow nerds typically don’t know about. Most people who experience bit rot will never know that that’s what happened, and if the rot is in “empty” space then no one will ever know it happened at all. There’s not some sort of central rot reporting authority, either. Backblaze’s disk failure reports are the closest thing we have to actual data on the subject. No one has enough information on bit rot to be “right.” In the absence of information, the human mind runs wild to fill in the blanks, and I think that’s where a good portion of this technology zealotry comes from.


UPDATE 2: Some fine folks on the openmediavault (OMV) forums disagreed with me and I penned a response which includes a reference to a scientific paper that backs several of my claims. Go check it out if you’re really bored. You know you want to! After all, who doesn’t love watching a nice trash fire on the internet?

UPDATE: Someone thought it’d be funny to submit this to Hacker News. It looks like I made some ZFS fans pretty unhappy. I’ll address some of the retorts posted on HN that didn’t consist of name-calling and personal attacks at the end of this article. And sorry, “OpenZFSonLinux,” I didn’t “delete the article after you rebuked what it said” as you so proudly posted; what I did was lock the post to private viewing while I added my responses, a process that doesn’t happen quickly when 33 of them exist. It’s good to know you’re stalking my posts though. It’s also interesting that you appear to have created a Hacker News user account solely for the purpose of said gloating. If this post has hurt your feelings that badly then you’re probably the kind of person it was written for.

It should also be noted that this is an indirect response to advice seen handed out on Reddit, Stack Overflow, and similar sites. For the grasping-at-straws-to-discredit-me HN nerds that can’t help but harp on the fact that “ZFS doesn’t use CRCs [therefore the author of this post is incompetent],” would you please feel free to tell that to all the people that say “CRC” when discussing ZFS? Language is made to communicate things and if I said “fletcher4” or “SHA256” they may not know what I’m talking about and think I’m the one who is clueless. Damned if you do, damned if you don’t.


tl;dr: Hard drives already do this, the risks of loss are astronomically low, ZFS is useless for many common data loss scenarios, start backing your data up you lazy bastards, and RAID-5 is not as bad as you think.


Bit rot just doesn’t work that way.

I am absolutely sick and tired of people in forums hailing ZFS (and sometimes btrfs which shares similar “advanced” features) as some sort of magical way to make all your data inconveniences go away. If you were to read the ravings of ZFS fanboys, you’d come away thinking that the only thing ZFS won’t do is install kitchen cabinets for you and that RAID-Z is the Holy Grail of ways to organize files on a pile of spinning rust platters.

In reality, the way that ZFS is spoken of by the common Unix-like OS user shows a gross lack of understanding of how things really work under the hood. It’s like the “knowledge” that you’re supposed to discharge a battery as completely as possible before charging it again: that advice hasn’t gone away even though it was only accurate for old Ni-Cd battery chemistry, and following it will wear out your laptop or cell phone’s lithium-ion cells far faster than if you’d just left them on the charger all the time. Bad knowledge that has spread widely tends to have a very hard time dying. This post shall serve as all of the nails AND the coffin for the ZFS and btrfs feature-worshiping nonsense we see today.

Side note: in case you don’t already know, “bit rot” is the phenomenon where data on a storage medium gets damaged because the medium “breaks down” over time naturally. Remember those old floppies you used to store your photos on and how you’d get read errors on a lot of them ten years later? That’s sort of how bit rot works, except bit rot is a lot scarier because it supposedly goes undetected, silently destroying your data, and you don’t find out until it’s too late and even your backups are corrupted.

“ZFS has CRCs for data integrity”

A certain category of people are terrified of the techno-bogeyman named “bit rot.” These people think that a movie file not playing back or a picture getting mangled is caused by data on hard drives “rotting” over time without any warning. The magical remedy they use to combat this today is the holy CRC, or “cyclic redundancy check.” It’s a certain family of hash algorithms that produce a magic number that will always be the same if the data used to generate it is the same every time.
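
If you want to see what a checksum actually buys you, the POSIX cksum tool (which really does use a CRC-32) makes for a quick demonstration; the file name below is just an example.

    # Identical data always produces the identical checksum.
    cp family_photo.jpg copy.jpg
    cksum family_photo.jpg copy.jpg      # both lines show the same CRC

    # Simulate "rot" by overwriting one byte in the copy...
    printf '\x01' | dd of=copy.jpg bs=1 seek=1000 count=1 conv=notrunc

    # ...and the checksum no longer matches, so the damage is detectable.
    cksum family_photo.jpg copy.jpg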

This is, by far, the number one pain in the ass statement out of the classic ZFS fanboy’s mouth and is the basis for most of the assertions that ZFS “protects your data” or “guards against bit rot” or other similar claims. While it is true that keeping a hash of a chunk of data will tell you if that data is damaged or not, the filesystem CRCs are an unnecessary and redundant waste of space and their usefulness is greatly over-exaggerated by hordes of ZFS fanatics.

Hard drives already do it better

Enter error-correcting codes (ECC). You might recognize that term because it’s also the specification for a type of RAM module that has extra bits for error checking and correction. What the CRC Jesus clan don’t seem to realize is that all hard drives since the IDE interface became popular in the 1990s have had ECC built into their design, and every single bit of information stored on the drive is both protected by it and transparently rescued by it once in a while.

Hard drives (as well as solid-state drives) use an error-correcting code to protect against small numbers of bit flips by both detecting and correcting them. If too many bits flip or the flips happen in a very specific way, the ECC in hard drives will either detect an uncorrectable error and indicate this to the computer, or the ECC will be thwarted and “rotten” data will successfully be passed back to the computer as if it were legitimate. The latter scenario is the only bit rot that can happen on the physical medium and pass unnoticed, but what did it take to get there? One bit flip will easily be detected and corrected, so we’re talking about a scenario where multiple bit flips happen in close proximity and in such a manner that the sector still appears mathematically valid to the ECC.

While it is a possible scenario, it is also very unlikely. A drive that has this many bit errors in close proximity is likely to be failing, and the S.M.A.R.T. status should indicate a rising reallocated sector count or worse while this sort of failure is going on. If you’re monitoring your drive’s S.M.A.R.T. status (as you should be) and it starts deteriorating, replace the drive!
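
Checking for that deterioration takes one command with smartmontools (the device name is a placeholder, and attribute names vary a little between vendors):

    # Non-zero raw values on these attributes are the classic "replace me soon" signs.
    smartctl -A /dev/sda | grep -Ei 'reallocated|pending|uncorrect'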

Flipping off your CRCs

Note that in most of these bit-flip scenarios, the drive transparently fixes everything and the computer never hears a peep about it. ZFS CRCs won’t change anything if the drive can recover from the error. If the drive can’t recover and sends back the dreaded uncorrectable error (UNC) for the requested sector(s), the drive’s error detection has already done the job that the ZFS CRCs are supposed to do; namely, the damage was detected and reported.

What about the very unlikely scenario where several bits flip in a specific way that thwarts the hard drive’s ECC? This is the only scenario where the hard drive would lose data silently, therefore it’s also the only bit rot scenario that ZFS CRCs can help with. ZFS with CRC checking will detect the damage despite the drive failing to do so and the damage can be handled by the OS appropriately…but what has this gained us? Unless you’re using specific kinds of RAID with ZFS or have an external backup you can restore from, it won’t save your data, it’ll just tell you that the data has been damaged and you’re out of luck.

Hardware failure will kill your data

If your drive’s on-board controller hardware, your data cable, your power supply, your chipset with your hard drive interface inside, your RAM’s physical slot connection, or any other piece of the hardware chain that goes from the physical platters to the CPU has some sort of problem, your data will be damaged. It should be noted that SATA interfaces protect data in transit with the same CRC-32 used by IEEE 802.3 Ethernet, so the transmission from the drive to the host system’s drive controller is protected from transmission errors. Using ECC RAM only helps with errors in the RAM itself; data can become corrupted while being shuffled around in other circuits, and the damaged values stored in ECC RAM will be “correct” as far as the ECC RAM is concerned.

The magic CRCs I keep making fun of will help a little more with these failures, because the hard drive’s ECC no longer protects the data once it has left the drive and is passing through intermediate locations that may not have their own CRC/ECC protection. This is the only remotely likely scenario I can think of where ZFS CRCs would be beneficial.

…but again: how likely is this sort of hardware failure to happen without the state of something else in the machine being trashed and crashing something? What are the chances of your chipset scrambling the data only while the other millions of transistors and capacitors on the die remain in a functional and valid working state? As far as I’m concerned, not very likely.

Data loss due to user error, software bugs, kernel crashes, or power supply issues usually won’t be caught by ZFS CRCs at all. Snapshots may help, but they depend on the damage being caught before the snapshots containing the good data are removed. If you save something and come back six months later to find it’s damaged, your snapshots might only cover a few months of the damaged file, with the good copy rotated out long ago. ZFS might help you a little, but it’s still no magic bullet.

Nothing replaces backups

By now, you’re probably realizing something about the data CRC gimmick: it doesn’t hold much value for data integrity on its own; it’s only useful for detecting damage, not correcting it or recovering good data. You should always back up any data that is important to you. You should always keep it on a separate physical medium that is ideally not attached to the computer on a regular basis.

Back up your data. I don’t care about your choice of filesystem or what magic software you write that will check your data for integrity. Do backups regularly and make sure the backups actually work.

In all of my systems, I use the far less exciting XFS on Linux with metadata CRCs (once they were added to XFS) on top of a software RAID-5 array. I also keep external backups of all systems updated on a weekly basis. I run S.M.A.R.T. long tests on all drives monthly (including the backups) and about once a year I will test my backups against my data with a tool like rsync that has a checksum-based matching option to see if something has “rotted” over time.
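
For the curious, the monthly and yearly checks boil down to something like this (device names and paths are placeholders):

    # Monthly: kick off a long self-test on each drive, then read the results later.
    smartctl -t long /dev/sda
    smartctl -a /dev/sda | grep -A1 'Self-test execution status'

    # Yearly: compare live data against the backup by checksum, without copying anything.
    # -n = dry run, -c = compare file contents by checksum instead of size/mtime.
    rsync -avnc /data/ /mnt/backup/data/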

All of my data loss tends to come from poorly typed ‘rm’ commands. I have yet to encounter a failure mode that I could not bounce back from in the past 10 years. ZFS and btrfs are complex filesystems with a few good things going for them, but XFS is simple, stable, and all of the concerning data loss bugs were ironed out a long time ago. It scales well and it performs better all-around than any other filesystem I’ve ever tested. I see no reason to move to ZFS and I strongly question the benefit of catching a highly unlikely set of bit damage scenarios in exchange for the performance hit and increased management complexity that these advanced features will cost me…and if I’m going to turn those features off, why switch in the first place?


Bonus: RAID-5 is not dead, stop saying it is

A related category of blind zealot is the RAID zealot, often following in the footsteps of the ZFS zealot or even occupying the same meat-suit. They loudly scream about the benefits of RAID-6, RAID-10, and fancier RAID configurations. They scorn RAID-5 for having terrible rebuild times and hype up the fact that “if a second drive dies while rebuilding, you lose everything!” They point at 10TB hard drives, do back-of-the-napkin equations, and tell you how dangerous and stupid it is to use RAID-5 and how their system that gives you less space on more drives is so much better.

Stop it, fanboys. You’re dead wrong and you’re showing your ignorance of good basic system administration practices.

I will concede that your fundamental points are mostly correct. Yes, RAID-5 can potentially have a longer rebuild time than multi-stripe redundant formats like RAID-6. Yes, losing a second drive after one fails or during a rebuild will lose everything on the array. Yes, a 32TB RAID-5 with five 8TB drives will take a long time to rebuild (about 50 hours at 180 MB/sec). No, this isn’t acceptable in an enterprise server environment. Yes, the infamous RAID-5 write hole (where a stripe and its parity aren’t both updated before a crash or power failure and the data is damaged as a result) is a problem, though a very rare one to encounter in the real world. How do I, the smug techno-weenie advocating for dead old stupid RAID-5, counter these obviously correct points?

  • Longer rebuild time? This is only true if you’re using the array for other work while it’s rebuilding. What you really mean is that RAID levels with more redundancy slow down less when the rebuild is interrupted by other work. No RAID exists that doesn’t slow down when rebuilding. If you don’t use it much during the rebuild, it’ll go a lot faster. No surprise there!
  • Losing a second drive? This is possible but statistically very unlikely. However, let’s assume you ordered a bunch of bad Seagates from the same lot number and you really do have a second failure during rebuild. So what? You should be backing up the data to an external backup, in which case this failure does not matter. RAID-6 doesn’t mean you can skip the backups. Are you really not backing up your array? What’s wrong with you?
  • RAID-5 in the enterprise? Yeah, that’s pretty much dead because the rebuild process slowdown is worse. An enterprise might have 28 drives in a RAID-10 because it’s faster in all respects. Most of us aren’t an enterprise and can’t afford 28 drives in the first place. It’s important to distinguish between the guy building a storage server for a rack in a huge datacenter and the guy building a home server for video editing work (which happens to be my most demanding use case).
  • The RAID-5 “write hole?” Use an uninterruptible power supply (UPS). You should be doing this on any machine with important data on it anyway! Even if you don’t use a UPS, Linux as of kernel version 4.4 has added journaling features for RAID arrays in an effort to close the RAID-5 write hole (a minimal mdadm sketch follows this list).
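
For that journaling option, a minimal sketch (not a battle-tested recipe) looks like this; it needs mdadm 3.4 or newer, a spare fast device to act as the journal, and the device names are placeholders:

    # Create a three-disk RAID-5 with a dedicated write journal on /dev/nvme0n1p1.
    # Writes land in the journal first, so an untimely power loss can't leave a
    # stripe and its parity out of sync (the classic write hole).
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
          /dev/sdb /dev/sdc /dev/sdd \
          --write-journal /dev/nvme0n1p1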

A home or small business user is better off with RAID-5 if they’re also doing backups like everyone should anyway. With a 7200 RPM 3TB drive (the best $/GB ratio in 7200 RPM drives as of this writing) costing around $95 each shipped, I can only afford so many drives. I know that I need at least three for a RAID-5 and I need double as many because I need to back that RAID-5 up, ideally to another machine with another identically sized RAID-5 inside. That’s a minimum of six drives for $570 to get two 6TB RAID-5 arrays, one main and one backup. I can buy a nice laptop or even build a great budget gaming desktop for that price, but for these storage servers I haven’t even bought the other components yet. To get 6TB in a RAID-6 or RAID-10 configuration, I’ll need four drives instead of three for each array, adding $190 to the initial storage drive costs. I’d rather spend that money on the other parts and in the rare instance that I must rebuild the array I can use the backup server to read from to reduce my rebuild time impact. I’m not worried about a few extra hours of rebuild.

Not everyone has thousands of dollars to allocate to their storage arrays or the same priorities. All system architecture decisions are trade-offs and some people are better served with RAID-5. I am happy to say, however, that if you’re so adamant that I shouldn’t use RAID-5 and should upgrade to your RAID levels, I will be happy to take your advice on one condition.

Buy me the drives with your own money and no strings attached. I will humbly and graciously accept your gift and thank you for your contribution to my technical evolution.

If you can add to the conversation, please feel free to comment. I want to hear your thoughts. Comments are moderated but I try to approve them quickly.


Update to address Hacker News respondents

First off, it seems that several Hacker News commenters either didn’t read what I wrote, missed a few things, or read more into it than what I really said. I want to respond to some of the common themes that emerged in a general fashion rather than individually.

I am well aware that ZFS doesn’t exactly use “CRCs” but that’s how a lot of people refer to the error-checking data in ZFS colloquially so that’s the language I adopted; you pointing out that it’s XYZ algorithm or “technically not a CRC” doesn’t address anything that I said…it’s just mental masturbation to make yourself feel superior and it contributes nothing to the discussion.

I was repeatedly scolded for saying that the ZFS checksum feature is useless despite never saying that. I acknowledge that it does serve a purpose and that use cases exist. My position is that ZFS checksums constitute a lot of additional computational effort to protect against the few very unlikely hardware errors that remain once the built-in error checking and correction in most modern hardware is taken into account. I used the word “many” in my “ZFS is useless for many common data loss scenarios” statement for a reason. This glossing over of important details is the reason I refer to such people as ZFS “zealots” or “fanboys.” Rather than taking the time to understand my position fully before responding, they quickly scanned the post for ways to demonstrate my clear ignorance of the magic of ZFS to the world and jumped all over the first thing that stood out.

 

kabdib related an anecdote where the RAM on a hard drive’s circuit board was flipping data bits in the cache portion, and the system involved used an integrity check similar to ZFS, which is how the damage was detected. The last line sums up the main point: “Just asserting ‘CRCs are useless’ is putting a lot of trust on stuff that has real-world failure modes.” Remember that I didn’t assert that CRCs are useless; I specifically outlined where the ZFS checksum feature cannot be any more helpful than existing hardware integrity checks, which is not the same thing. I question how common it is for hard drive RAM to flip only the bits in a data buffer/cache area without corrupting other parts of RAM that would cause the drive’s built-in software to fail. I’m willing to bet that there aren’t any statistics out there on such a thing. It’s good that a ZFS-like construct caught your hardware issue, but your obscure hard drive failure anecdote does not necessarily extrapolate out to cover billions of hard drives. Still, if you’re making an embedded device like a video game system and you can afford to add that layer of paranoia to it, I don’t think that’s a bad thing. Remember that the purpose of my post is to address those who blindly advocate ZFS as if it’s the blood of Computer Jesus that magically solves the problems of data integrity and bit rot.

rgbrenner offered indirect anecdotal evidence and more repetitions of the lie that I asserted “CRCs are useless,” then made a ridiculous attempt at insulting me: “If this guy wrote a filesystem (something that he pretends to have enough experience to critique), it would be an unreliable unusable piece of crap.” Well then, “rgbrenner,” all I can say is that if you are so damned smart and have proof of this “unreliable and unusable” state that it’s in, file a bug against the filesystem I wrote and use on a daily basis for actual work so it can be fixed, and feel free to keep the condescending know-it-all attitude to yourself when you do so.

AstralStorm made a good point that I’ve also been trying to make: if your data is damaged in RAM that ZFS doesn’t control, perhaps while it’s being edited in a program, ZFS will have no idea that the damage happened.

wyoung2 contributed a lot of information that was well-written and helpful. I don’t think I need to add anything to it, but it deserves some recognition since it’s a shining chunk of gold in this particular comment septic tank.

X86BSD said that “Consumer hardware is notoriously busted. Even most of the enterprise hardware isn’t flawless. Firmware bugs etc.” I disagree. In my experience the vast majority of hardware works as expected. Even most of the computers with every CPU regulator capacitor leaking electrolyte pass extended memory testing and CPU burn-in tests. Hard drives fail a lot more than other hardware does, sure, but even then the ECC does what it’s supposed to do: it detects the error and reports it instead of handing over the broken data that failed the error check. I’d like some hard stats rather than anecdotes, but I’m not even sure they exist given the huge diversity of failure scenarios that can come about.

asveikau recalls the hard drive random bit flipping problem hitting him as well. I don’t think that this anecdote has value because it’s a hard drive hardware failure. Sure, ZFS can catch it, but let’s remember that any filesystem would catch it because the filesystem metadata blocks will be read back with corruption too. XFS has optional metadata CRCs and those would catch this kind of disk failure so I don’t think ZFS can be considered much better for this failure scenario.

wyoung2 made another lengthy comment that requires me to add some details: I generally work only in the context of Linux md RAID (the raid5 driver specifically), so yes, there is a way to scrub the entire array: ‘echo check > /sys/block/md0/md/sync_action’. Also, if a Linux md RAID encounters a read error on a physical disk, the data is pulled from the remaining disk(s) and written back to the bad block, forcing the drive to either rewrite the data successfully or reallocate the sector, which has the same effect; it no longer dumps a whole drive from the RAID on the basis of a single read error unless the attempts to do a “repair write” fail as well. I can’t really comment on the anecdotal hardware problems discussed; I personally would not tolerate hardware that is faulty as described and would go well out of my way to fix the problem or replace the whole machine if no end was in sight. (I suppose this is a good time to mention that power supply issues and problems with power regulation can corrupt data…)
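
To make that concrete, the scrub and its results look roughly like this on an md array (md0 is a placeholder):

    # Start a full-array scrub; every sector of every member disk gets read,
    # parity gets verified, and read errors trigger the "repair write" behavior.
    echo check > /sys/block/md0/md/sync_action

    # Watch progress and see how many parity mismatches were found.
    cat /proc/mdstat
    cat /sys/block/md0/md/mismatch_cnt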

Yet another wyoung2 comment points out one big advantage ZFS has: if you use RAID that ZFS is aware of, ZFS checksums allow ZFS to know what block is actually bad when you check the array integrity. I actually mentioned this in my original post when I referenced RAID that ZFS pairs with. If you use a proper ZFS RAID setup then ZFS checksums become useful for data integrity; my focus was on the fact that without this ZFS-specific RAID setup the “ZFS protects your data” bullet-point is false. ZFS by itself can only tell you about corruption and it’s a dangerous thing to make people think the protection offered by a ZFS RAID setup is offered by ZFS by itself.
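
In other words, the checksums only become actionable when ZFS also controls the redundancy; with a RAID-Z or mirrored pool, a scrub can both find and repair bad blocks (the pool name “tank” below is a placeholder):

    # Read every block in the pool, verify it against its checksum, and rebuild
    # anything that fails from the redundant copies or parity.
    zpool scrub tank

    # Per-device read/write/checksum error counts, plus anything unrepairable.
    zpool status -v tank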

At this point I can only assume that rgbrenner just enjoys being a dick. And that, in contrast, AstralStorm understood what I was trying to say to at least some extent.

DiabloD3 quoted me on “RAID is not a replacement for backups” and then mentions ZFS external backup commands. Hey, uh, you realize that the RAID part was basically a separate post, right? In fact, there is not a single mention of ZFS in the RAID section of the post other than as a topic transition mechanism in the first paragraph. I included the RAID part because the ZFS religion and the RAID-over-5-only religion have the same “smell.”

I’ll have to finish this part later. It takes a lot of time to respond to criticism. Stay tuned for more. I have to stop so I can unlock the post and keep OpenZFSonLinux from eating off his own hands with anticipation. As a cliff-hanger, check this out…I enjoyed the stupidity of X86BSD’s second comment about me endangering my customers’ data [implicitly because I don’t use ZFS] so much that I changed my blog to integrate it and embrace how horrible of a person I am for not making my customers use ZFS with checksums on their single-disk Windows machines. If my destiny is to be “highly unethical” then I might as well embrace it.