The problem of finding and handling duplicate files has been with us for a long time. Since late 1999, the de facto answer to “how can I find and delete duplicate files?” for Linux and BSD users has been a program called ‘fdupes’ by Adrian Lopez. This venerable staple of system administrators is extremely handy when you’re trying to eliminate redundant data to reclaim disk space, clean up a code base full of copy-pasted files, or delete photos you’ve accidentally copied from your digital camera to your computer more than once. I’ve been quite grateful to have it around, particularly when dealing with customer data recovery scenarios where every possible copy of a file is recovered and the final set contains thousands of unnecessary duplicates.
Unfortunately, development on Adrian’s fdupes had, for all practical purposes, ground to a halt. From June 2014 to July 2015, the only significant functional change to the code was a modification to make it compile on Mac OS X. The code’s stagnant nature has definitely shown itself in real-world tests: in February 2015, Eliseo Papa published “What is the fastest way to find duplicate pictures?”, which benchmarks 15 duplicate file finders (including an early version of my fork, which we’ll ignore for the moment), places the original fdupes dead last in operational speed, and shows it to be heavily CPU-bound rather than I/O-bound. In fact, Eliseo’s tests show fdupes taking at least 11 times longer to run than 13 of the other duplicate file finders in the benchmark!
As a heavy user of the program on fairly large data sets, I had noticed the poor performance of the software and became curious as to why it was so slow for a tool that should simply be comparing pairs of files. After inspecting the code base, I found a number of huge performance killers:
- Tons of time was wasted waiting on progress to print to the terminal
- Many performance-boosting C features weren’t used (static, inline, etc.)
- A couple of one-line functions were very “hot,” adding heavy call overhead
- Using MD5 for file hashes was slower than other hash functions
- Storing MD5 hashes as strings instead of binary data was inefficient
- A “secure” hash like MD5 isn’t needed; matches get checked byte-for-byte
In December 2014, I submitted a pull request to the fdupes repository which solved these problems. Nothing from the pull request was discussed on GitHub, and none of the fixes were incorporated into fdupes. I emailed Adrian to discuss my changes with him directly; there was some interest in certain changes, but in the end nothing was changed and my emails went unanswered.
It seemed that fdupes development was doomed to stagnation.
In the venerable tradition of open source software, I forked it and gave my new development tree a new name to differentiate it from Adrian’s code: jdupes. I solved the six big problems outlined above with these changes:
- Rather than printing progress indication for every file examined, I added a delay counter to drastically reduce terminal printing. This was a much bigger deal when working over SSH.
- I switched the code and compilation process to C99 and added the relevant keywords (static, inline) to improve overall performance.
- The “hot” one-line functions were changed to #define macros to chop the function call overhead for them in half.
- (This one covers the fifth and sixth problems too.) I wrote my own hash function and replaced all of the MD5 code with it, resulting in a benchmarked speed boost of approximately 17%. The resulting hashes are passed around as 64-bit unsigned integers rather than ASCII strings, which (on 64-bit machines) reduces each hash comparison to a single compare instruction; a sketch of the idea follows this list.
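To make that last change concrete, here is a minimal sketch of the idea rather than the actual jdupes code (the names here are hypothetical): the match check is a #define macro over plain uint64_t values, so it compiles down to a single integer compare with no call overhead, instead of a strcmp() over a 32-character MD5 hex digest.

```c
#include <stdint.h>
#include <stdio.h>

/* Hash comparison as a macro: no function call overhead at all,
 * and since the hash is a plain uint64_t instead of an ASCII hex
 * digest, a match check is one compare instruction on 64-bit CPUs. */
#define HASHES_MATCH(a, b) ((a) == (b))

struct file_entry {
	const char *path;
	uint64_t hash;   /* binary hash value, not a 32-char MD5 string */
};

int main(void)
{
	struct file_entry a = { "one.bin", 0xdeadbeefcafef00dULL };
	struct file_entry b = { "two.bin", 0xdeadbeefcafef00dULL };

	/* Files whose hashes match still get checked byte-for-byte
	 * later, so a "secure" hash buys nothing here. */
	if (HASHES_MATCH(a.hash, b.hash))
		printf("%s and %s are match candidates\n", a.path, b.path);
	return 0;
}
```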
After making all of these changes and enjoying the massive performance boost they brought about, I felt motivated to continue looking for potential improvements. I didn’t realize at the time that a simple need to eliminate duplicate files more quickly would morph into spending the next half-year ruthlessly digging through the code for ways to make things better. Between the initial pull request that led to the fork and Eliseo Papa’s article, I managed to get a lot done:
- Reduced four stat() calls on the exact same file in rapid succession to a single stat() call, and eliminated all of the short functions that did nothing but call stat() (the caching idea is sketched after this list)
- Ported jdupes to Windows by making it compile properly with MinGW; now I could run it on the machine doing the data recovery instead of being forced to boot to Linux or copy the data to a Linux system
- Ported jdupes to Mac OS X by removing a call that always failed on OS X
- Brought in a long-standing patch to allow hard linking of duplicate file sets
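The stat() reduction boils down to fetching the data once and caching it on the file’s record. A minimal sketch of that pattern, with hypothetical names rather than the actual jdupes structures:

```c
#include <stdio.h>
#include <sys/stat.h>

/* Hypothetical per-file record; the real jdupes structure differs. */
struct filerec {
	const char *path;
	struct stat st;    /* cached stat() data */
	int have_stat;     /* nonzero once st has been filled in */
};

/* Fill the cache on first use; every later query reads the copy. */
static int get_stat(struct filerec *f)
{
	if (f->have_stat) return 0;
	if (stat(f->path, &f->st) != 0) return -1;
	f->have_stat = 1;
	return 0;
}

int main(int argc, char **argv)
{
	if (argc != 2) return 1;
	struct filerec f = { argv[1], {0}, 0 };

	/* Several questions about the same file, one stat() call total,
	 * instead of a fresh stat() inside each tiny wrapper function. */
	if (get_stat(&f) != 0) return 1;
	printf("size:  %lld\n", (long long)f.st.st_size);
	if (get_stat(&f) != 0) return 1;  /* cached; no syscall this time */
	printf("inode: %llu\n", (unsigned long long)f.st.st_ino);
	return 0;
}
```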
At this point, Eliseo published his February 19 article on the fastest way to find duplicates. I did not discover the article until July 8 of the same year, by which time jdupes was at least three versions ahead of the one being tested, so I was initially disappointed with where jdupes stood in the benchmarks relative to some of the other tested programs. Still, even the early jdupes code (version 1.51-jody2) was much faster than the original fdupes code at the same job.
A month and a half into development, jdupes was 19 times faster in a third-party test than the code it was forked from.
Nothing will make your programming efforts feel more validated than seeing something like that from a total stranger.
Between the article being published and my finding it, I had continued to make heavy improvements:
- Got rid of unnecessary malloc() calls and other relics left behind by the MD5 code
- Improved name sorting so that automated deletions are more likely to remove the less favorably named duplicates
- Stopped hashing small files twice, which was wasting a surprising amount of time
- Heavily reduced disk “thrashing” during file comparisons by reading 1 MB at a time instead of 8 KB (a sketch of this follows the list)
- Though it seems obvious, hard-linked files are always duplicates and should be treated as such when “consider hard links” is enabled, but they weren’t treated that way until I added this optimization
- Fixed all code that violated strict aliasing rules and enabled strict aliasing
- Massively cut down on calls to stat(), a problem in fdupes which I had noticed in an strace many years prior
- Wrote the string_table memory allocator to cut out most of the malloc() overhead (also sketched below)
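Here is a minimal sketch of the larger-read idea, not the jdupes implementation: a byte-for-byte confirmation pass that alternates 1 MB reads between two candidate files instead of ping-ponging between them 8 KB at a time, which is what causes the thrashing.

```c
#include <stdio.h>
#include <string.h>

#define CHUNK_SIZE 1048576  /* 1 MB per read instead of 8 KB */

/* Returns 1 if the files are byte-for-byte identical, 0 if not,
 * -1 on error. Bigger chunks mean far fewer alternating seeks
 * between the two files being compared. */
static int files_identical(const char *path1, const char *path2)
{
	static char buf1[CHUNK_SIZE], buf2[CHUNK_SIZE];
	FILE *f1 = fopen(path1, "rb");
	FILE *f2 = fopen(path2, "rb");
	int same = -1;

	if (f1 && f2) {
		same = 1;
		for (;;) {
			size_t n1 = fread(buf1, 1, CHUNK_SIZE, f1);
			size_t n2 = fread(buf2, 1, CHUNK_SIZE, f2);
			if (n1 != n2 || memcmp(buf1, buf2, n1) != 0) {
				same = 0;
				break;
			}
			if (n1 < CHUNK_SIZE) break;  /* hit EOF on both */
		}
	}
	if (f1) fclose(f1);
	if (f2) fclose(f2);
	return same;
}

int main(int argc, char **argv)
{
	if (argc != 3) return 1;
	printf("%s\n", files_identical(argv[1], argv[2]) == 1 ?
	       "identical" : "different");
	return 0;
}
```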
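The string_table allocator exploits the fact that a duplicate scanner never frees an individual path until it exits, so strings can be carved out of big malloc() blocks with simple pointer arithmetic. A minimal sketch under that assumption (hypothetical sizes and names, not the actual jdupes allocator):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_CHUNK 65536  /* carve strings out of 64 KB blocks */

struct string_table {
	char *block;   /* current block being carved up */
	size_t used;   /* bytes already handed out from block */
	size_t size;   /* total bytes in block */
};

/* Copy a string into the table: one malloc() per 64 KB of strings
 * instead of one per string. Nothing is ever freed individually;
 * full blocks are simply abandoned (their strings stay valid) and
 * everything lives until the program exits. */
static char *string_table_add(struct string_table *t, const char *s)
{
	size_t len = strlen(s) + 1;

	if (t->block == NULL || t->size - t->used < len) {
		size_t want = len > TABLE_CHUNK ? len : TABLE_CHUNK;
		t->block = malloc(want);
		if (t->block == NULL) return NULL;
		t->size = want;
		t->used = 0;
	}
	char *dest = t->block + t->used;
	memcpy(dest, s, len);
	t->used += len;
	return dest;
}

int main(void)
{
	struct string_table table = { NULL, 0, 0 };
	char *p = string_table_add(&table, "/usr/src/linux-3.19.5/Makefile");
	if (p) printf("%s\n", p);
	return 0;
}
```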
When I found Eliseo’s article from February, I sent him an email inviting him to try out jdupes again:
I have benchmarked jdupes 1.51-jody4 from March 27 against jdupes 1.51-jody6, the current code in the Git repo. The target is a post-compilation directory for linux-3.19.5 with 63,490 files and 664 duplicates in 152 sets. A “dry run” was performed first to ensure all files were cached in memory and to remove variance due to disk I/O. The benchmarking was as follows:
$ ./compare_fdupes.sh -nrq /usr/src/linux-3.19.5/
Five sequential runs produced times that were consistently close, within about ±0.020 s of one another.
In half a year of casual spare-time coding, I had made fdupes 32 times faster.
There’s probably not a lot more performance to be squeezed out of jdupes today. Most of my work on the code has settled into adding new features and improving Windows support. In particular, Windows has supported hard-linked files for a long time, and I’ve taken full advantage of that support. I’ve also made the progress indicator much more informative to the user. At this point, I consider the majority of my efforts complete. jdupes is even included as an available package in Arch Linux.
The work on jdupes has benefited my other projects as well. For example, I can see the potential for using the string_table allocator in other programs that don’t need to free() string memory until they exit. Most importantly, working on jdupes has improved my programming skills tremendously, and I have learned far more than I could have imagined would come from improving such a seemingly simple file management tool.
If you’d like to use jdupes, feel free to download one of my binary releases for Linux, Windows, and Mac OS X. You can find them here.