Blog

The Old Man’s Pendant II

“The Old Man’s Pendant” was a short film I created as a more involved and complex project to teach myself video editing. Prior to that, the only things I had created were silly five-second joke clips or music videos spliced together from random things I pointed my camcorder at and polished with crude experimentation in visual effects. After my third (and most complicated) music video, I decided it was time to do something with more structure. Inspired by James Rolfe’s “How I Got Started” story and being stuck at home due to the ice and snow on the roads, I came up with nothing more than a crude idea for a short movie and started recording. I’ll spare you the details since I’m planning to make a documentary about my journey in the near future, but the bottom line is that a small project I expected to finish in a few days at most ended up taking around four months. It was a completely original work from scratch through which I was learning not only video editing but also writing and music composition. I also learned first-hand just how difficult it really is to put together a high-quality video project.

That movie was shot in January of 2016. One year later, stuck at home and snowed in yet again, I decided that it would be fun to shoot a sequel. My story ideas were better, my footage was more usable with fewer silly mistakes, and my skill levels had improved significantly. It’s not realistic to finish the post-production work in a week, but surely I could get it done within a month or two, right?

Instead of four months, this one took six months to complete. It’s funny how that works. Today, the sequel to my first short film went up on YouTube.

“The Old Man’s Pendant II” took a long time to finish, and it was worth every bit of that time. It is by far the most polished and interesting thing I’ve created to date. My least developed filmmaking skill is music composition, yet I’m still listening to my own soundtrack as I type this! The improvement between the films is obvious. While there are still plenty of things that could be polished in the final film, I try to avoid falling too hard into the perfectionist artist trap. At some point you need to stop fussing over your creation, put it out there, and move on to the next big thing.

I’m very proud of my latest short film. I hope that you find it as enjoyable and satisfying as the process of creating it has been for me. I’ll be doing a documentary about my progress from the first film to the completion of the second one, so stay tuned for that.

If you’re still interested, feel free to watch the official trailer for The Old Man’s Pendant II, listen to and download the film’s soundtrack, watch the TOMP2 teaser video, and check out the behind-the-scenes teaser.

These videos are also available at Vidme: The Old Man’s Pendant II, TOMP2 Teaser, TOMP2 Behind the Scenes Teaser.

ZFS won’t save you: fancy filesystem fanatics need to get a clue about bit rot (and RAID-5)

UPDATE 3 (2020-01-01): I wrote this to someone on Reddit in a discussion about the ZFS/XFS/RAID-5 issue, and it does a good job of explaining why this article exists and why it’s presented in an argumentative tone. Please read it before you read the article below. Thanks, and have a wonderful 2020!

There really is no stopping zealots. Anyone who reads my article all the way through and takes the text at face value (rather than taking liberties with interpretation, as the abundant comments underneath it demonstrate) can see that I’m not actually dumping on ZFS nor saying that RAID-5 is the One True Way(tm). It really boils down to this: ZFS is over-hyped; people who recommend it tend to omit the information needed to make its protection capabilities practically useful; XFS is better for several use cases; and RAID-5 is a good choice for a lot of lower-end users who don’t need fast rebuilds, though it is not for everyone.

I strongly advocate for people using what fits their specific needs, and two years ago, there was a strong ZFS fanatical element on r/DataHoarder that was aggressively pushing ZFS as a data integrity panacea that all people should use, but leaving out critical things like RAID-Z being required for automatic repair capabilities. At the same time, I had read so many “DON’T USE RAID-5, IT SHOULD BE BANNED!” articles that I was tired of both of these camps.

The fact is that we have no useful figures on the prevalence of bit rot and there are a ton of built-in hardware safeguards against it already in place that so many fellow nerds typically don’t know about. Most people who experience bit rot will never know that that’s what happened, and if the rot is in “empty” space then no one will ever know it happened at all. There’s not some sort of central rot reporting authority, either. Backblaze’s disk failure reports are the closest thing we have to actual data on the subject. No one has enough information on bit rot to be “right.” In the absence of information, the human mind runs wild to fill in the blanks, and I think that’s where a good portion of this technology zealotry comes from.


UPDATE 2: Some fine folks on the openmediavault (OMV) forums disagreed with me and I penned a response which includes a reference to a scientific paper that backs several of my claims. Go check it out if you’re really bored. You know you want to! After all, who doesn’t love watching a nice trash fire on the internet?

UPDATE: Someone thought it’d be funny to submit this to Hacker News. It looks like I made some ZFS fans pretty unhappy. I’ll address some of the retorts posted on HN that didn’t consist of name-calling and personal attacks at the end of this article. And sorry, “OpenZFSonLinux,” I didn’t “delete the article after you rebuked what it said” as you so proudly posted; what I did was lock the post to private viewing while I added my responses, a process that doesn’t happen quickly when 33 of them exist. It’s good to know you’re stalking my posts though. It’s also interesting that you appear to have created a Hacker News user account solely for the purpose of said gloating. If this post has hurt your feelings that badly then you’re probably the kind of person it was written for.

It should also be noted that this is an indirect response to advice seen handed out on Reddit, Stack Overflow, and similar sites. For the grasping-at-straws-to-discredit-me HN nerds that can’t help but harp on the fact that “ZFS doesn’t use CRCs [therefore the author of this post is incompetent],” would you please feel free to tell that to all the people that say “CRC” when discussing ZFS? Language is made to communicate things and if I said “fletcher4” or “SHA256” they may not know what I’m talking about and think I’m the one who is clueless. Damned if you do, damned if you don’t.


tl;dr: Hard drives already do this, the risks of loss are astronomically low, ZFS is useless for many common data loss scenarios, start backing your data up you lazy bastards, and RAID-5 is not as bad as you think.


Bit rot just doesn’t work that way.

I am absolutely sick and tired of people in forums hailing ZFS (and sometimes btrfs which shares similar “advanced” features) as some sort of magical way to make all your data inconveniences go away. If you were to read the ravings of ZFS fanboys, you’d come away thinking that the only thing ZFS won’t do is install kitchen cabinets for you and that RAID-Z is the Holy Grail of ways to organize files on a pile of spinning rust platters.

In reality, the way that ZFS is spoken of by the common Unix-like OS user shows a gross lack of understanding of how things really work under the hood. It’s like the “knowledge” that you’re supposed to discharge a battery as completely as possible before charging it again, which hasn’t gone away even though it was only accurate for old Ni-Cd battery chemistry and will wear out your laptop or cell phone lithium-ion cells far faster than if you’d just left them on the charger all the time. Bad knowledge that has spread widely tends to have a very hard time dying. This post shall serve as all of the nails AND the coffin for the ZFS and btrfs feature-worshiping nonsense we see today.

Side note: in case you don’t already know, “bit rot” is the phenomenon where data on a storage medium gets damaged because that medium “breaks down” over time naturally. Remember those old floppies you used to store your photos on and how you’d get read errors on a lot of them ten years later? That’s sort of how bit rot works, except bit rot is a lot scarier because it supposedly goes undetected, silently destroying your data, and you don’t find out until it’s too late and even your backups are corrupted.

“ZFS has CRCs for data integrity”

A certain category of people are terrified of the techno-bogeyman named “bit rot.” These people think that a movie file not playing back or a picture getting mangled is caused by data on hard drives “rotting” over time without any warning. The magical remedy they use to combat this today is the holy CRC, or “cyclic redundancy check.” It’s a family of hash algorithms that produce a magic number which will always be the same as long as the data used to generate it stays the same.

This is, by far, the number one pain-in-the-ass statement out of the classic ZFS fanboy’s mouth and is the basis for most of the assertions that ZFS “protects your data” or “guards against bit rot” or other similar claims. While it is true that keeping a hash of a chunk of data will tell you whether that data is damaged, the filesystem CRCs are an unnecessary and redundant waste of space and their usefulness is greatly exaggerated by hordes of ZFS fanatics.

Hard drives already do it better

Enter error-correcting codes (ECC). You might recognize that term because it’s also the specification for a type of RAM module that has extra bits for error checking and correction. What the CRC Jesus clan doesn’t seem to realize is that all hard drives since the IDE interface became popular in the 1990s have ECC built into their design, and every single bit of information stored on the drive is both protected by it and transparently rescued by it once in a while.

Hard drives (as well as solid-state drives) use an error-correcting code to protect against small numbers of bit flips by both detecting and correcting them. If too many bits flip or the flips happen in a very specific way, the ECC in hard drives will either detect an uncorrectable error and indicate this to the computer or the ECC will be thwarted and “rotten” data will successfully be passed back to the computer as if it was legitimate. The latter scenario is the only bit rot that can happen on the physical medium and pass unnoticed, but what did it take to get there? One bit flip will easily be detected and corrected, so we’re talking about a scenario where multiple bit flips happen in close proximity and in such a manner that it is still mathematically valid.

While it is a possible scenario, it is also very unlikely. A drive that has this many bit errors in close proximity is likely to be failing, and the S.M.A.R.T. status should indicate a higher reallocated sector count or even worse when this sort of failure is going on. If you’re monitoring your drive’s S.M.A.R.T. status (as you should be) and it starts deteriorating, replace the drive!

Flipping off your CRCs

Note that in most of these bit-flip scenarios, the drive transparently fixes everything and the computer never hears a peep about it. ZFS CRCs won’t change anything if the drive can recover from the error. If the drive can’t recover and sends back the dreaded uncorrectable error (UNC) for the requested sector(s), the drive’s error detection has already done the job that the ZFS CRCs are supposed to do; namely, the damage was detected and reported.

What about the very unlikely scenario where several bits flip in a specific way that thwarts the hard drive’s ECC? This is the only scenario where the hard drive would lose data silently, therefore it’s also the only bit rot scenario that ZFS CRCs can help with. ZFS with CRC checking will detect the damage despite the drive failing to do so and the damage can be handled by the OS appropriately…but what has this gained us? Unless you’re using specific kinds of RAID with ZFS or have an external backup you can restore from, it won’t save your data, it’ll just tell you that the data has been damaged and you’re out of luck.

Hardware failure will kill your data

If your drive’s on-board controller hardware, your data cable, your power supply, your chipset with your hard drive interface inside, your RAM’s physical slot connection, or any other piece of the hardware chain that goes from the physical platters to the CPU has some sort of problem, your data will be damaged. It should be noted that SATA drive interfaces use IEEE 802.3 CRCs, so the transmission from the drive CPU to the host system’s drive controller is protected from transmission errors. Using ECC RAM only helps with errors in the RAM itself, but data can become corrupted while being shuffled around in other circuits, and the damaged values stored in ECC RAM will be “correct” as far as the ECC RAM is concerned.

The magic CRCs I keep making fun of will help with these failures a little more because the hard drive’s ECC no longer protects the data once the data is outside of a CRC/ECC capable intermediate storage location. This is the only remotely likely scenario that I can think of which would make ZFS CRCs beneficial.

…but again: how likely is this sort of hardware failure to happen without the state of something else in the machine being trashed and crashing something? What are the chances of your chipset scrambling the data only while the other millions of transistors and capacitors on the die remain in a functional and valid working state? As far as I’m concerned, not very likely.

Data loss due to user error, software bugs, kernel crashes, or power supply issues usually won’t be caught by ZFS CRCs at all. Snapshots may help, but they depend on the damage being caught before the snapshot of the good data is removed. If you save something and come back six months later to find it damaged, your snapshots might only reach back a few months, all containing the damaged file, while the good copy was discarded long ago. ZFS might help you a little, but it’s still no magic bullet.

Nothing replaces backups

By now, you’re probably realizing something about the data CRC gimmick: it doesn’t hold much value for data integrity and it’s only useful for detecting damage, not correcting it and recovering good data. You should always back up any data that is important to you. You should always keep it on a separate physical medium that is ideally not attached to the computer on a regular basis.

Back up your data. I don’t care about your choice of filesystem or what magic software you write that will check your data for integrity. Do backups regularly and make sure the backups actually work.

In all of my systems, I use the far less exciting XFS on Linux with metadata CRCs (once they were added to XFS) on top of a software RAID-5 array. I also keep external backups of all systems updated on a weekly basis. I run S.M.A.R.T. long tests on all drives monthly (including the backups) and about once a year I will test my backups against my data with a tool like rsync that has a checksum-based matching option to see if something has “rotted” over time.
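
For illustration, the kind of checksum-based comparison I’m describing can be done with an rsync dry run; the paths below are hypothetical placeholders for your live data and backup locations:

# -r recurse, -n dry run (report only, copy nothing), -c compare file contents by checksum
# instead of size/timestamp, -i itemize each difference so mismatched files are listed.
rsync -rnci --delete /array/data/ /mnt/backup/data/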

All of my data loss tends to come from poorly typed ‘rm’ commands. I have yet to encounter a failure mode that I could not bounce back from in the past 10 years. ZFS and btrfs are complex filesystems with a few good things going for them, but XFS is simple, stable, and all of the concerning data loss bugs were ironed out a long time ago. It scales well and it performs better all-around than any other filesystem I’ve ever tested. I see no reason to move to ZFS and I strongly question the benefit of catching a highly unlikely set of bit damage scenarios in exchange for the performance hit and increased management complexity that these advanced features will cost me…and if I’m going to turn those features off, why switch in the first place?


Bonus: RAID-5 is not dead, stop saying it is

A related category of blind zealot is the RAID zealot, often following in the footsteps of the ZFS zealot or even occupying the same meat-suit. They loudly scream about the benefits of RAID-6, RAID-10, and fancier RAID configurations. They scorn RAID-5 for having terrible rebuild times and hype up the fact that “if a second drive dies while rebuilding, you lose everything!” They point at 10TB hard drives, do back-of-the-napkin equations, and tell you how dangerous and stupid it is to use RAID-5 and how their system that gives you less space on more drives is so much better.

Stop it, fanboys. You’re dead wrong and you’re showing your ignorance of good basic system administration practices.

I will concede that your fundamental points are mostly correct. Yes, RAID-5 can potentially have a longer rebuild time than multi-stripe redundant formats like RAID-6. Yes, losing a second drive after one fails or during a rebuild will lose everything on the array. Yes, a 32TB RAID-5 with five 8TB drives will take a long time to rebuild (about 50 hours at 180 MB/sec). No, this isn’t acceptable in an enterprise server environment. Yes, the infamous RAID-5 write hole (where a stripe and its parity aren’t both updated before a crash or power failure and the data is damaged as a result) is a problem, though a very rare one to encounter in the real world. How do I, the smug techno-weenie advocating for dead old stupid RAID-5, counter these obviously correct points?

  • Longer rebuild time? This only matters if you’re using the array for other work while it’s rebuilding. What you really mean is that rebuilding slows down less when you interrupt it with other work if you’re using RAID levels with more redundancy. No RAID exists that doesn’t slow down when rebuilding. If you don’t use the array much during the rebuild, it’ll go a lot faster. No surprise there!
  • Losing a second drive? This is possible but statistically very unlikely. However, let’s assume you ordered a bunch of bad Seagates from the same lot number and you really do have a second failure during rebuild. So what? You should be backing up the data to an external backup, in which case this failure does not matter. RAID-6 doesn’t mean you can skip the backups. Are you really not backing up your array? What’s wrong with you?
  • RAID-5 in the enterprise? Yeah, that’s pretty much dead because of the rebuild process slowdown being worse. An enterprise might have 28 drives in a RAID-10 because it’s faster in all respects. Most of us aren’t an enterprise and can’t afford 28 drives in the first place. It’s important to distinguish between the guy building a storage server for a rack in a huge datacenter and the guy building a home server for video editing work (which happens to be my most demanding use case).
  • The RAID-5 “write hole?” Use an uninterruptible power supply (UPS). You should be doing this on any machine with important data on it anyway! Even if you don’t use a UPS, Linux as of kernel version 4.4 has journaling features for md RAID arrays in an effort to close the RAID-5 write hole; a minimal example of setting that up is sketched just below this list.
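
For anyone curious what that journal looks like in practice, here’s a minimal sketch using Linux md and a sufficiently new mdadm. The device names are hypothetical, and the journal device should be something fast and durable such as an SSD partition:

# Hypothetical devices: three data disks plus a small SSD partition as the write journal.
# Stripe updates hit the journal first, so a crash mid-write can be replayed safely.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 \
    --write-journal /dev/nvme0n1p1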

A home or small business user is better off with RAID-5 if they’re also doing backups like everyone should anyway. With 7200 RPM 3TB drives (the best $/GB ratio among 7200 RPM drives as of this writing) costing around $95 each shipped, I can only afford so many drives. I know that I need at least three for a RAID-5 and I need twice as many because I need to back that RAID-5 up, ideally to another machine with another identically sized RAID-5 inside. That’s a minimum of six drives for $570 to get two 6TB RAID-5 arrays, one main and one backup. I can buy a nice laptop or even build a great budget gaming desktop for that price, but for these storage servers I haven’t even bought the other components yet. To get 6TB in a RAID-6 or RAID-10 configuration, I’ll need four drives instead of three for each array, adding $190 to the initial storage drive costs. I’d rather spend that money on the other parts, and in the rare instance that I must rebuild the array I can read from the backup server to reduce the rebuild-time impact. I’m not worried about a few extra hours of rebuild.

Not everyone has thousands of dollars to allocate to their storage arrays or the same priorities. All system architecture decisions are trade-offs and some people are better served with RAID-5. I am happy to say, however, that if you’re so adamant that I shouldn’t use RAID-5 and should upgrade to your RAID levels, I will be happy to take your advice on one condition.

Buy me the drives with your own money and no strings attached. I will humbly and graciously accept your gift and thank you for your contribution to my technical evolution.

If you can add to the conversation, please feel free to comment. I want to hear your thoughts. Comments are moderated but I try to approve them quickly.


Update to address Hacker News respondents

First off, it seems that several Hacker News commenters either didn’t read what I wrote, missed a few things, or read more into it than what I really said. I want to respond to some of the common themes that emerged in a general fashion rather than individually.

I am well aware that ZFS doesn’t exactly use “CRCs” but that’s how a lot of people refer to the error-checking data in ZFS colloquially so that’s the language I adopted; you pointing out that it’s XYZ algorithm or “technically not a CRC” doesn’t address anything that I said…it’s just mental masturbation to make yourself feel superior and it contributes nothing to the discussion.

I was repeatedly scolded for saying that the ZFS checksum feature is useless despite never saying that. I acknowledge that it does serve a purpose and use cases exist. My position is that ZFS checksums add a lot of computational effort to protect against a few very unlikely hardware errors once the built-in error checking and correction in most modern hardware is taken into account. I used the word “many” in my “ZFS is useless for many common data loss scenarios” statement for a reason. This glossing over of important details is the reason I refer to such people as ZFS “zealots” or “fanboys.” Rather than taking the time to understand my position fully before responding, they quickly scanned the post for ways to demonstrate my clear ignorance of the magic of ZFS to the world and jumped all over the first thing that stood out.

 

kabdib related an anecdote where the RAM on a hard drive’s circuit board was flipping data bits in the cache portion, and the system involved used an integrity check similar to ZFS, which is how the damage was detected. The last line sums up the main point: “Just asserting ‘CRCs are useless’ is putting a lot of trust on stuff that has real-world failure modes.” Remember that I didn’t assert that CRCs are useless; I specifically outlined where the ZFS checksum feature cannot be any more helpful than existing hardware integrity checks, which is not the same thing. I question how common it is for hard drive RAM to flip only the bits in a data buffer/cache area without corrupting other parts of RAM that would cause the drive’s built-in software to fail. I’m willing to bet that there aren’t any statistics out there on such a thing. It’s good that a ZFS-like construct caught your hardware issue, but your obscure hard drive failure anecdote does not necessarily extrapolate out to cover billions of hard drives. Still, if you’re making an embedded device like a video game system and you can afford to add that layer of paranoia to it, I don’t think that’s a bad thing. Remember that the purpose of my post is to address those who blindly advocate ZFS as if it’s the blood of Computer Jesus and magically solves the problems of data integrity and bit rot.

rgbrenner offered indirect anecdotal evidence, repetitions of the lie that I asserted “CRCs are useless,” and then made a ridiculous attempt at insulting me: “If this guy wrote a filesystem (something that he pretends to have enough experience to critique), it would be an unreliable unusable piece of crap.” Well then, “rgbrenner,” all I can say is that if you are so damned smart and have proof of this “unreliable and unusable” state that it’s in, file a bug against the filesystem I wrote and use on a daily basis for actual work so it can be fixed, and feel free to keep the condescending know-it-all attitude to yourself when you do so.

AstralStorm made a good point that I’ve also been trying to make: if your data is damaged in RAM that ZFS doesn’t control, perhaps while the data is being edited in a program, ZFS will have no idea that the damage ever happened.

wyoung2 contributed a lot of information that was well-written and helpful. I don’t think I need to add anything to it, but it deserves some recognition since it’s a shining chunk of gold in this particular comment septic tank.

X86BSD said that “Consumer hardware is notoriously busted. Even most of the enterprise hardware isn’t flawless. Firmware bugs etc.” I disagree. In my experience the vast majority of hardware works as expected. Even most of the computers with every CPU regulator capacitor leaking electrolyte pass extended memory testing and CPU burn-in tests. Hard drives fail a lot more than other hardware does, sure, but even then the ECC does what it’s supposed to do: it detects the error and reports it instead of handing over the broken data that failed the error check. I’d like some hard stats rather than anecdotes, but I’m not even sure they exist due to the huge diversity of failure scenarios that can come about.

asveikau recalls the hard drive random bit flipping problem hitting him as well. I don’t think that this anecdote has value because it’s a hard drive hardware failure. Sure, ZFS can catch it, but let’s remember that any filesystem would catch it because the filesystem metadata blocks will be read back with corruption too. XFS has optional metadata CRCs and those would catch this kind of disk failure so I don’t think ZFS can be considered much better for this failure scenario.

wyoung2 made another lengthy comment that requires me to add some details: I generally work only in the context of Linux md RAID (the raid5 driver specifically), so yes, there is a way to scrub the entire array: 'echo check > /sys/block/md0/md/sync_action'. Also, if a Linux md RAID encounters a read error on a physical disk, the data is pulled from the remaining disk(s) and written back to the bad block, forcing the drive to either rewrite the data successfully or reallocate the sector, which has the same effect; it no longer dumps a whole drive from the RAID on the basis of a single read error unless the attempts to do a “repair write” also fail. I can’t really comment on the anecdotal hardware problems discussed; I personally would not tolerate hardware that is faulty as described and would go well out of my way to fix the problem or replace the whole machine if no end was in sight. (I suppose this is a good time to mention that power supply issues and problems with power regulation can corrupt data…)
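
For reference, here’s roughly what that scrub cycle looks like on an md array (md0 is a placeholder for whatever your array is named):

# Read every stripe and verify parity against the data (the "scrub").
echo check > /sys/block/md0/md/sync_action
# Watch the progress of the check.
cat /proc/mdstat
# After it finishes, this counter shows how many mismatched stripes were found.
cat /sys/block/md0/md/mismatch_cnt
# Rewrite parity for anything that didn't match.
echo repair > /sys/block/md0/md/sync_action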

Yet another wyoung2 comment points out one big advantage ZFS has: if you use RAID that ZFS is aware of, ZFS checksums allow ZFS to know what block is actually bad when you check the array integrity. I actually mentioned this in my original post when I referenced RAID that ZFS pairs with. If you use a proper ZFS RAID setup then ZFS checksums become useful for data integrity; my focus was on the fact that without this ZFS-specific RAID setup the “ZFS protects your data” bullet-point is false. ZFS by itself can only tell you about corruption and it’s a dangerous thing to make people think the protection offered by a ZFS RAID setup is offered by ZFS by itself.

At this point I can only assume that rgbrenner just enjoys being a dick. And that, in contrast, AstralStorm understood what I was trying to say to at least some extent.

DiabloD3 quoted me on “RAID is not a replacement for backups” and then mentions ZFS external backup commands. Hey, uh, you realize that the RAID part was basically a separate post, right? In fact, there is not a single mention of ZFS in the RAID section of the post other than as a topic transition mechanism in the first paragraph. I included the RAID part because the ZFS religion and the RAID-over-5-only religion have the same “smell.”

I’ll have to finish this part later. It takes a lot of time to respond to criticism. Stay tuned for more. I have to stop so I can unlock the post and keep OpenZFSonLinux from eating off his own hands with anticipation. As a cliff-hanger, check this out…I enjoyed the stupidity of X86BSD’s second comment about me endangering my customers’ data [implicitly because I don’t use ZFS] so much that I changed my blog to integrate it and embrace how horrible of a person I am for not making my customers use ZFS with checksums on their single-disk Windows machines. If my destiny is to be “highly unethical” then I might as well embrace it.

Block YouTube ads at your OpenWRT router

UPDATE (2018-10-23): At some point over the past couple of months, YouTube started falling back in many cases to serving ads to TV and phone apps the same way that they serve videos, effectively bypassing all DNS-based ad blocking for YouTube apps. This probably does not work for you anymore, though I have noticed some ad breaks failing to load, so it may reduce the ad frequency even if it does not block them entirely.

I don’t have time to explain in depth how to set up OpenWRT in general. For you geeks who have already done it, here’s how you can block your smart TV and un-rooted phones and other devices from getting YouTube ads using your router!

In LuCI, go to Network – Firewall – Custom Rules, add the following line (change 192.168.0.1 to your router’s LAN IP address), and save/submit:

iptables -A PREROUTING -t nat -p udp --dport 53 -i br-lan -j DNAT --to 192.168.0.1:53
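
The rule above only redirects UDP DNS queries; most lookups use UDP, but if you also want to cover DNS over TCP port 53, an equivalent (optional) TCP rule looks like this:

iptables -A PREROUTING -t nat -p tcp --dport 53 -i br-lan -j DNAT --to 192.168.0.1:53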

Add the following entries to /etc/hosts (change 192.168.0.1 to your router’s LAN IP address, or try 0.0.0.0 instead):

192.168.0.1 doubleclick.net
192.168.0.1 googleadservices.com
192.168.0.1 pagead2.googlesyndication.com
192.168.0.1 pubads.g.doubleclick.net
192.168.0.1 partnerad.l.doubleclick.net
192.168.0.1 beacons.extremereach.io
192.168.0.1 secure-us.imrworldwide.com
192.168.0.1 sb.scorecardresearch.com
192.168.0.1 secure.insightexpressai.com
192.168.0.1 googleads.g.doubleclick.net
192.168.0.1 ad.doubleclick.net
192.168.0.1 dart.l.doubleclick.net
192.168.0.1 dts.innovid.com
192.168.0.1 s0.2mdn.net
192.168.0.1 ade.googlesyndication.com
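
After saving /etc/hosts, restart dnsmasq (the stock OpenWRT DNS/DHCP service, which reads /etc/hosts by default) so the new entries take effect, or just reboot the router:

/etc/init.d/dnsmasq restart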

Google has a big list of ad servers so there may be more that I’ve missed, but after blocking these hosts I saw the ads stop without other problems.

[FIX] SD card reader dropping or “losing” cards when used in Windows 10

I have a fairly new laptop that came with Windows 8.1 and has a Realtek USB 2.0 SD card reader. After installing Windows 10 on it, at some point the SD card reader would show me the contents of the SD card, but then when I’d try to open files on the card it would randomly drop the card as if I had pulled it out and put it right back in. I thought it might be a dirty or loose connection in the SD card slot, but I blew the slot out and nothing changed. The card was a brand new card that was unboxed an hour earlier. The computer is only a year old and the card reader had rarely been used. Because this random card connection failure was very specific, I decided that the problem could be in software rather than hardware. I also knew of a couple of other people who had similar SD card problems after moving to Windows 10.

Here’s how I fixed the problem. I went to the manufacturer’s support site and downloaded the original Windows 8.1 driver for the Realtek USB SD card reader (Windows 10 can install drivers from Windows Vista, 7, 8, and 8.1 in most cases). I extracted the ZIP file (because that’s how they packaged it, obviously!). I opened Device Manager, found the card reader under Universal Serial Bus controllers, right-clicked it, and chose to update the driver. Instead of having Windows do the work for me, I said to “browse my computer for driver software” and to “let me pick from a list of drivers.” I clicked “Have disk…” and pointed it to the extracted folder where the driver was stored. The hard part is that you can’t just point it at the extracted folder itself; you must point it at the folder where the driver’s INF file(s) happen to be, which was actually a subfolder called “DrvBin64” for the Realtek card reader’s 64-bit Windows driver. From there it was just a matter of clicking “next” until the driver was installed.

To make sure Windows 10 didn’t auto-update the driver back to the bad version, I had to open the System control panel (right-click the Start button for a quick shortcut there), click “advanced system settings” on the left, click the “hardware” tab, and change the device installation settings to NOT install drivers automatically from Windows Update.

I can’t guarantee this will fix your SD card issues on Windows 10, but if it worked for me then it’s definitely worth a shot! Windows 10’s generic device drivers don’t always work 100% correctly with the hardware they support, but fortunately you have the option to force it to use the original driver that is known to work.

If this helped you (or didn’t help you) let everyone know in the comments below! Be sure to include your computer’s make and model number!

Let’s drive tech support scammers out of business by calling them repeatedly!

I’ve mentioned tech support scammers here before, and I think I’ve come up with the perfect solution to ruin their business model:

Waste all of their time.

If you run into a tech support scam website with an 800 number to call and you’re bored, give them a ring and make up a story! Have a great time leading the scammers on. Make some excuse about how your Internet is still dial-up and you can’t get YouTube to work. Tell them you need to renew your antivirus because you have the flu. Ask them if they offer sexual services. Whatever you can think of to waste their time and keep them from being available to scam someone else.

The number that I’ve seen most recently is (844) 544-1381.

Common tech support and “Microsoft” scams: don’t fall for them!

I have been seeing A LOT of people lately who have been caught in today’s most common computer scams.

I want to review them briefly and help you avoid making a mistake and giving control of your computer or bank account to a scammer. All of them are modern takes on the “snake oil” smoke-and-mirrors show from history designed to separate you from your money.

There are three ways that the latest wave of tech scams work:

  1. You get a random call from someone claiming to be from Microsoft or another large computer company, sometimes on all of your cell and home phones in a short time frame. They always sport a fairly heavy foreign accent and phrase things strangely. They’ll tell you all kinds of stories about how terrible your computer is or how many viruses you’re leaking on the Internet. It’ll sound REALLY BAD. They’ll offer to help you fix it…for a price, of course.
  2. The pop-up scary talking warning! Your browser loads an infected website or a malicious ad and gets kicked over to a HUGE SCARY WARNING that says your computer is infected and you need to call the number on the screen. If your speakers aren’t muted, it’ll also talk to you in a synthesized voice. If you call, you’ll get the same people as in (1) but this time they didn’t have to luck up and cold-call you, plus you’ll already be terrified so they can trick you into doing what they want.
  3. You call “tech support” for a large company like HP or Dell. You’re not really talking to an HP or Dell employee; you’re talking to an iYogi employee in India whose job is to sell you a support contract. I’m not sure if they’re the same people behind the other two scams, but it’s the same song and dance: you’ll get a nice show hyping up how horrible a situation your computer is in and a hard sell on buying support from them.

In all of these situations, the person on the phone will want to use remote support tools such as TeamViewer or Citrix GoToAssist to get remote control of your computer. Once they have remote control, they are capable of doing ANYTHING THEY WANT to your computer, though they don’t usually seem to infect machines; it’s mainly a high-pressure sales pitch for $300 of computer snake oil.

CUT SCAMS OFF BEFORE THEY CAN AFFECT YOU.

For cold-call scammers in (1), hang up quickly. If they call again later, keep hanging up. The more they talk, the more likely it is that they’ll convince you to remote them in and pay up.

For the huge scary pop-up in (2), open Task Manager and kill your browser from there. If that’s not working out, just hold the power button on the computer for five seconds and it’ll shut off. Your computer IS NOT INFECTED. If it happens again after rebooting, try power-cycling your modem and router; these can get temporarily “infected” in a way that causes the computer to land on these scary sites quickly, but this “infection” doesn’t survive the power to the box being unplugged.

For the big corporate tech support calls in (3), it’s a bit more difficult because sometimes you’ll be talking to a legitimate support agent that isn’t going to try to scam you. The key things that tell you it’s going to be a scam are that they (A) want to get remote access to your computer without spending a lot of time trying to talk you through it first, (B) they tell you that your computer has serious problems and want to help you fix them, or (C) they mention money at any point in the process. IF ANY OF THESE THREE THINGS HAPPENS, try calling back or seek help from someone else that you trust. Make sure you’re calling the support phone number on the manufacturer’s official website as well!

Almost all of the computers I’ve checked in the past month that were targeted by these scams didn’t have any serious problems before or after the scammer got on, but many of my customers had to initiate chargebacks on their cards, change their bank accounts, or get their cards exchanged, which is frustrating and annoying.

If you’re in or near the Chatham County, Randolph County, Orange County, or Wake County areas of North Carolina and you’re concerned that your computer has been messed up by a scammer, you can get support from me at Tritech Computer Solutions in Siler City, including 100% free in-store diagnostics and repair quotes.

YouTube takes creator revenue from “not advertiser-friendly” videos, so why are ads running on them?

If you haven’t heard the buzz lately, here’s the deal: YouTube has gone on a massive campaign of stripping content creators’ videos of monetization for the content being “advertiser-unfriendly.” I don’t want to get into the details because they’ve been written about practically everywhere else online at this point. The bottom line is that YouTube won’t pay creators any ad revenue for videos that they deem as not “advertiser-friendly,” which you would think means that ads won’t be run at all on those videos since they’re obviously “not advertiser friendly.”

Let’s see if that’s true.

[Screenshots: Melanie Murphy’s tweet, the demonetized video, a YouTube search for the video, and the ads running on it.]

YouTube is taking monetization from videos under the premise of “not advertiser friendly” and still running ads on the content, keeping all the money for themselves.

The key to faster shell scripts: know your shell’s features and use them!

I have a cleanup program that I’ve written as a Bash shell script. Over the years, it has morphed from a thing that just deleted a few fixed directories if they existed at all (mostly temporary file directories found on Windows) to a very flexible cleanup tool that can take a set of rules and rewrite and modify them to apply to multiple versions of Windows, along with safeguards that check the rules and auto-rewritten rules to prevent the equivalent of an “rm -rf /*” from happening. It’s incredibly useful for me; when I back up a customer’s PC data, I run the cleaner script first to delete many gigabytes of unnecessary junk and speed up the backup and restore process significantly.

Unfortunately, having the internal rewrite and safety check rules has the side effect of massively slowing the process. I’ve been tolerating the slowness for a long time, but as the rule set increased in size over the past few years the script has taken longer and longer to complete, so I finally decided to find out what was really going on and fix this speed problem.

Profiling shell scripts isn’t quite as easy as profiling C programs; with C, you can just use a tool like Valgrind to find out where all the effort is going, but shell scripts depend on the speed of the shell, the kernel, and the plethora of programs executed by the script, so it’s harder to follow what goes on and find the time sinks. However, I observed that a lot of time was spent in the steps between deleting items; since each rewrite and safety check is done on-the-fly as deletion rules are presented for processing, those were likely candidates. The first thing I wanted to know was how many times the script called an external program to do work; you can easily kill a shell script’s performance with unnecessary external program executions. To gather this info, I used the strace tool:

strace -f -o strace.txt tt_cleaner

This produced a file called “strace.txt” which contains every single system call issued by both the cleaner script and any forked programs. I then looked for the execve() system call and gathered the counts of the programs executed, excluding “execve resumed” events which aren’t actual execve() calls:

grep execve strace.txt | sed 's/.*execve/execve/' | cut -d\" -f2 | grep -v resumed | sort | uniq -c | sort -g

The resulting output consisted of numbers below 100 until the last two lines, and that’s when I realized where the bottleneck might be:

4157 /bin/sed
11227 /usr/bin/grep

That’s a LOT of calls to sed, but the number of calls to grep was almost three times bigger, so that’s where I started to search for ways to improve. As I’ve said, the rewrite code takes each rule for deletion and rewrites it for other possible interpretations; “Username\Application Data” on Windows XP was moved to “Username\AppData\Roaming” on Vista and up, while “All Users\Application Data” was moved to “C:\ProgramData” in the same versions, plus there is a potential mirror of every single rule in “Username\AppData\Local\VirtualStore”. The rewrite code handles the expansion of the deletion rules to cover every single one of these possible cases. The outer loop of the rewrite engine grabs each rewrite rule in order while the inner loop does the actual rewriting to the current rule AND all prior rewrites to ensure no possibilities are missed (VirtualStore is largely to blame for this double-loop architecture). This means that anything done within the inner loop is executed a huge number of times, and the very first command in the inner loop looked like this:

if echo "${RWNAMES[$RWNCNT]}" | grep -qi "${REWRITE0[$RWCNT]}"

This checks to see if the rewrite rule applies to the cleaner rule before doing the rewriting work. It calls grep once for every single iteration of the inner loop. I replaced this line with the following:

if [[ "${RWNAMES[$RWNCNT]}" =~ .*${REWRITE0[$RWCNT]}.* ]]

I also had to tack a "shopt -s nocasematch" onto the top of the shell script to make the comparison case-insensitive. The result was a 6x speed increase. Testing on an existing data backup which had already been cleaned (no “work” to do) showed a consistent time reduction from 131 seconds to 22 seconds! The grep count dropped massively, too:

97 /usr/bin/grep

Bash can do wildcard and regular expression matching of strings (the =~ comparison operator is a regex match), so any place where your shell script uses the “echo-grep” combination in a loop stands to benefit greatly from these Bash features. Unfortunately, these are not POSIX shell features and using them will lead to non-portable scripts, but if you will never use the script on other shells and the performance boost is significant, why not use them?
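
If portability to a plain POSIX shell does matter to you, a case statement with glob patterns can do a simple substring match without forking grep, though it isn’t a full regular expression match; here’s a minimal sketch with made-up variable names:

# POSIX sh: glob-based substring match, no external grep process forked.
case "$haystack" in
  *"$needle"*) echo "match" ;;
  *) echo "no match" ;;
esac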

The bigger lesson here is that you should take some time to learn about the features offered by your shell if you’re writing advanced shell scripts.

Update: After writing this article, I set forth to eliminate the thousands of calls to sed. I was able to change an “echo-sed” combination to a couple of Bash substring substitutions. Try it out:

FOO=${VARIABLE/string_to_replace/replacement}

It accepts $VARIABLES where the strings go, so it’s quite powerful. Best of all, the total runtime dropped to 10.8 seconds for a total speed boost of over 11x!
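
As a made-up example of the kind of conversion I’m talking about, here is an echo-sed pipeline and its pure-Bash equivalent side by side:

# Old way: forks a sed process on every loop iteration.
FIXED=$(echo "$RULE" | sed 's|Application Data|AppData/Roaming|')
# New way: Bash parameter expansion, no external process at all.
# (Slashes in the replacement text are literal; only the first two act as delimiters.)
FIXED=${RULE/Application Data/AppData/Roaming}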

Holy crap! You can go to prison for not paying your parents’ medical bills!

I just found this thread on the /r/LegalAdvice subreddit about a concept called “filial responsibility,” which basically means that parents and/or their adult children can be held legally responsible for paying medical bills incurred by each other. Apparently 29 states in the USA have filial responsibility laws on the books, but I (like many other people) had never heard a thing about them before today.

Filial responsibility is super draconian and scary shit.

Interest in filial responsibility laws has slowly resurfaced after the finalization of a Pennsylvania court case in which a son was held legally liable for his mother’s $93,000 nursing home bill. Before this case came about, these laws had long since fallen out of any sort of actual enforcement, in a similar vein to anti-sodomy or “crime against nature” laws that technically make it a felony to have oral or anal sex with a human. I started digging a bit and found out that these laws could be a nasty time bomb in North Carolina, because NC criminal law says that failing to take care of your parents when the State decides you should be able to do so is grounds for giving you a criminal record.

That’s right! Don’t pay for your parents’ medical bills? Class 2 misdemeanor, have fun in prison.

I am a firm believer that no person in a free society should ever be held liable for debts (financial or moral) incurred by any other person and that debtors’ prisons should be completely abolished. If you believe the same thing, contact your state government representatives and make sure they know you want these laws stricken from the books.