Blog

The Real Reason Tech Culture “Hates Feminism”

I wrote a lengthy comment in response to a Wired Opinion article [EDIT: It appears to have since been deleted; so much for “discussion,” eh?] called Donglegate: Why the Tech Community Hates Feminists, whose URL carries a totally different description, “richards-affair-and-misogyny-in-tech” (a slug that describes the article far more honestly). The article is largely a repetition of radical feminist doctrine that ignores the very simple core of what brought the Adria Richards PyCon disaster about: Adria bullied two men by shaming them in the court of public opinion, then hid under a cloak of feminism and social justice to avoid consequences for her bad behavior. There seems to be a total lack of understanding as to why people in tech culture are vehemently opposed to modern “third-wave” radical feminism working its way into a cubicle or message board near them, and I thought it would be good to shed some light on the subject. Tech culture doesn’t hate women, and it doesn’t hate traditional feminism in the sense of equal opportunity and treatment, but it doesn’t tolerate radical feminism, and that’s where the line is drawn. The comment reads as follows:

The tech community is full of people who don’t like walking on eggshells just because someone is overly sensitive and gets offended at the drop of a hat. Gender has nothing to do with it. This situation would be no different if a white male had taken the exact same actions. Gender is irrelevant. Tech people generally don’t see the world through -ism colored glasses in the first place. What articles like this (and people like Adria) are trying to do is force us technical types to wear those glasses, and we outright refuse. Everyone is equal in my eyes at first. It’s when they start speaking that the criticisms start to mount, and while techies tend to pull no punches in an argument, we’re used to that style of debate: everything is thrown out there immediately with no editing or sugar-coating, we hash it out, find somewhere to agree, and it’s over with.

Adria bypassed this. Instead of saying “I have a problem with that,” she attempted to try them in the court of public opinion. Techies don’t like the court of public opinion because it ignores the merits of the core issues and immediately favors whoever tells the best story or has the most favored reputation. On the actual merits of the situation, Adria Richards loses immediately. Feminists lose because techies don’t accept their premises in the first place and, knowing that brick walls are devoid of logic and cannot be argued with, simply tell them to toss off.

The truth is that the vast majority of people know modern radical feminist rhetoric and the cleverly crafted jargon that comes with it are, in a word, bullshit. Techies are particularly sensitive to this. Feminism, being a term that is gender-biased and therefore favoring some people over others for factors they cannot (easily) change, is viewed in the tech world as a radical religious belief of sorts, one not to dignify with any meaningful response.

Consider this: anyone who is in tech today and over 25 remembers a time when everyone had a handle or screen name, and you didn’t know if the other person on IRC, AIM, Yahoo chat, etc. was male or female, young or old, white or black, able or handicapped, across the street or across the ocean. We’ve spent a large amount of time talking to people who we only knew by cryptic pseudonyms. We didn’t know nor care about these things. We spent a lot of time in an environment where equality was the default.

The article is telling us, a generation or two that already see everyone as equals, that if we’re men, we’re treating women poorly by default. We call bullshit, because it’s bullshit. When that doesn’t shut down the argument, we ask “so what can I PERSONALLY DO as a solution to this problem?” and we get nothing usable in response. This article pounds out alleged problems in painstaking detail, and yet offers no real solutions that the average programmer in his cubicle can put to use. Until workable, reasonable solutions are offered, all of this radical feminist macroaggression towards the male gender will forever be of no real-world value and fall upon deaf ears.

I would also like to point out that while I disagree with the majority of what the article’s author has written, I have also defended her in at least one comment. Criticism of the article is potentially productive, but criticism (particularly name-calling and other immaturity) of the person just because the article is not in agreement with your opinions is bad for everyone. We all need to learn to respectfully disagree, with an emphasis on respect. Also, someone else’s bad behavior does not justify your own. Try to play nicely with others, and we won’t have so many Donglegates in the future.

[SOLVED] Adobe InDesign CS6 crashes on “Starting Service Registry”

UPDATE: Comments have been left telling me that this fix continues to be relevant even with the newest Adobe InDesign CC 2019 version, so the problem likely includes InDesign CC 2018, 2017, 2016, and 2015, and probably InDesign CS4, CS5, and CS5.5 as well.

Adobe InDesign CS6 on my Windows 7 64-bit laptop, with a generally very error-free installation (I own a computer shop, so why wouldn’t it be ultra clean?), recently insisted on crashing every single time I tried to start it. I traced what the program was doing right before it crashed and found the problem.

Adobe InDesign CS6, upon “Starting Service Registry,” probes the default printer, and it crashes if the default printer is a network printer that can’t be reached (and thus can’t be queried for capabilities, settings, or anything else).

I use PuTTY with SSH port forwarding enabled for port 631 so I can print to the office printer from my house while I’m in an SSH session (via the Common UNIX Printing System, CUPS). There is no printer at my house at all, as I have no need for one, so my default printer is a network printer on “localhost:631” using the Internet Printing Protocol (IPP), which is “disconnected” unless I happen to have PuTTY connected to the office workstation at the time. The problem is that InDesign dies horribly when it queries this default printer at startup and the printer is “missing.” I verified this by connecting with PuTTY to the office, thus making the default printer available again, and the error went away.
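
If you suspect the same failure mode, a quick sanity check before launching InDesign is to see whether anything answers on the default printer’s port. This is a hedged sketch for my particular localhost:631 CUPS setup; any port-testing tool will do, and curl ships with most systems these days:

# If the tunnel (and therefore the default printer) is up, CUPS answers on port 631:
curl -s http://localhost:631/ > /dev/null && echo "printer reachable" || echo "printer unreachable - expect the crash"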

SOLUTION: If you are having this problem, see if your default printer is off, disconnected, or a network printer that can’t be reached. In the worst case, try changing the default printer to a virtual printer such as the XPS document writer or Adobe PDF printer, so that the default printer is always available when InDesign starts.

I’d also like to note that Adobe is far from unique in the “default printer problems equals program startup weirdness” category; I’ve seen Microsoft Office applications start extremely slowly, as well as other programs throw errors or crash at startup, all because they’re querying printers at startup and developers clearly never test for “what if the default printer is off or unreachable?” contingencies. I would love to see developers take such things into account more often, because this class of bugs affects more people than one might realize, particularly in corporate environments where “the network printer” might have been turned off for some reason, or on laptops where the office network is not connected.

For reference, my Windows Error Reporting log in the Event Viewer shows the following information for this error:

Faulting application name: InDesign.exe, version: 8.0.0.370, time stamp: 0x4f72c3ee
Faulting module name: AGM.dll, version: 4.26.18.19553, time stamp: 0x4f3a0265
Exception code: 0xc0000005
Fault offset: 0x0024d0cd
Faulting process id: 0xe2c
Faulting application start time: 0x01ce07030b2c00c8
Faulting application path: C:\Program Files (x86)\Adobe\Adobe InDesign CS6\InDesign.exe
Faulting module path: C:\Program Files (x86)\Adobe\Adobe InDesign CS6\AGM.dll

Why I’ll never build a home in Chatham County, NC

I have lived in Siler City, NC (in Chatham County, NC) for four years. Having established a solid commercial presence here and finding the area to be generally decent and agreeable to live in, I’ve been seriously looking into the process of establishing a more permanent residence. However, every time I look up more information regarding the process, I see more reasons to avoid Chatham County for establishing any kind of permanent residence. The reasons are many and varied, but the biggest one comes down to a single factor that causes me more concern than any other. What is this major issue that single-handedly doomed my fantasies of building a home on some undeveloped Chatham County land?

Impact fees.

That’s right, impact fees. Something which I’d never once heard of before I came here. I’ve looked at land in Oxford, NC in the past, as well as various other counties north and northwest of Orange County, and not once have I heard of “impact fees.” What’s an “impact fee” supposed to be for, anyway? Apparently, it’s a one-time county government surcharge (read: “TAX”) that’s supposed to raise money for building or maintaining schools. You know, like elementary, middle, and high schools…for the children I shall never ever produce. And boy, these kids I don’t and won’t have would cost me a ton. How much, you ask?

Chatham County’s impact fee is a one-time fee of $3,500.

Needless to say, I’m not keen on buying a $50,000 parcel of empty land to build my future upon if I have to give Chatham County $3,500 for the privilege of building a house there. Despite being a small business owner (or perhaps because of that), I DO NOT make a large amount of money every month; in fact, for the 50+ hours a week I work, I’d say my take-home pay is equivalent to that of someone making $8 an hour. Fortunately, I have also gone to some trouble to ensure I live reasonably within my means. I’d love to own instead of rent, but let’s put this into perspective: at $8 an hour for 50 hours, that’s roughly $400 a week, so Chatham County is telling me that to start building my dream home here, I have to hand over about 9 weeks of my pay just for the “impact fee” privilege, ignoring all other fees such as those required for permits and inspections. That the impact fee funds something I’ll never see any benefit from is merely an added insult. I don’t want to pay for someone else’s children to go to school, and guess what? I’ll look elsewhere because of the hubris of the fools in charge of Chatham County.

I mean, think about this: if I’m buying land for $50,000 and the county demands $3,500 just to “allow me” to build a home on it, that’s 7% of the price of the land tacked on before the first nail is driven! That’s not all there is to it, either; I could name other regressive, punishing taxation that chases off development, such as the “recreation fee,” but my point is clear.

Chatham County: no one wants to move here because you run things like you’re Chapel Hill, Cary, or Raleigh, but you’re none of these. Chatham is rapidly becoming a “bedroom community” and many businesses are shutting down or moving to neighboring counties that don’t have absurdly brain-dead policies like this. I can’t count how many decent-sized corporations have considered Chatham County, NC as a possible location for some kind of sizable facility that would bring hundreds of jobs to the area, only to be denied something they needed. From what I understand, the old Joan Fabrics building in Siler City (which is now occupied by Acme-McCrary, leaving an empty Acme-McCrary building right across the street) was examined for potential as a distribution center for Sheetz, and that deal fell through because someone in some level of local government didn’t want all that tractor-trailer traffic to be there on US 64. Hello, genius, the building has something like 8-10 loading docks on the side! If you want to fill it, is it really reasonable to expect that those docks will be left mostly unused?

In four years, I have witnessed a slow but steady decline in Chatham County’s economy, and while my business is doing well, it’s more because of adaptation and our ability to engineer workflow and customer experience improvements; the county and city governments largely seem to prefer that businesses shut down and get replaced by trees and pastures. On top of that, fees such as impact and recreation fees that charge a premium for the privilege of developing and growing Chatham County end up reducing overall revenues by strongly encouraging people to build their lives in Burlington, Sanford, and Asheboro instead.

Please, for the love of all that’s sane and logical, get rid of these kinds of fees. They hurt everyone in the entire county, and they’re the biggest reason I’ll never build my permanent home here. I don’t want to live in a bedroom community, and when the lease is up on the current location for my business, I’m going to have to justify remaining in Siler City. A forecast of future economic activity will play very heavily into this choice.

How many people are going through the same thought process about this subject every year? How much opportunity for growth has Chatham flipped the bird towards and lost forever? With the constant growth going on in the county, these fees will only become more difficult to justify over time.

So, how do I handle repairing a RAID-5 in a server I can’t touch?

Two drives failed in a 5-disk RAID-5 array at a client who had dropped our services; fortunately, I’d put in a backup system, so when they brought us back on, I had a full backup from midnight to restore from. Unfortunately, for various reasons only one replacement drive was on order, and they needed the server back up as soon as possible. Now, here’s the million-dollar question:

How do you reconstruct a RAID-5 array that once spanned 5 drives when only 3 good drives remain? Consider the following:

  • The original array was 5x 500GB drives, of which only 3 remain
  • The backup data is about 500GB (so simply mirroring the data with RAID-1 isn’t an option; roughly 500GB of data won’t fit on the sub-500GB data partition of a single drive)
  • The Linux boot/root filesystem sits on a RAID-1 across the first partitions of all five hard drives (it keeps working as long as even one drive survives)

Since the boot/root RAID-1 still has three working mirrors (it can lose two more drives and keep running), concerns over that array are pointless. The client doesn’t have time to wait on a spare drive or two, but we do want to order a spare anyway for possible future capacity needs, and that’s exactly what the next step accommodates. What was the best compromise? Simple!

mdadm --create /dev/md1 -l 5 -n 4 /dev/sd[abc]2 missing

This creates a 4-drive RAID-5 from the surviving partitions of the old 5-drive array, intentionally degraded so that the fourth disk can be added later. From there, I created a new XFS filesystem, mounted it, and restored the data. This is what /proc/mdstat looks like now on that system (note that no reboot was needed for any of these repairs!):

# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdc2[2] sdb2[1] sda2[0]
      1406519040 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/3] [UUU_]

md0 : active raid1 sda1[0] sdb1[1] sdc1[4] sdd1[5](F)
      19542976 blocks [5/3] [UU__U]

unused devices: <none>
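
When the spare drive finally arrives, the repair is a couple of commands with no downtime. This is a hedged sketch that assumes the new disk has been partitioned to match the others and shows up as /dev/sde:

# Clear the failed member out of the boot/root mirror first:
mdadm /dev/md0 --remove /dev/sdd1
# Add the new partitions; md rebuilds onto them automatically:
mdadm --add /dev/md0 /dev/sde1
mdadm --add /dev/md1 /dev/sde2
# Keep an eye on the rebuild progress:
watch cat /proc/mdstat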

“Special by default” function keys: a dumb idea

So many PC laptops, particularly those in the cheaper range, are now shipping with “special functions” such as screen brightness adjustment and wireless adapter on/off switching as the default action when you press the F1 through F12 function keys. On what planet was this a good idea? What kind of morons were sitting around at HP and Dell going “gee, no one ever uses F-keys, so let’s make them do something else?”

What’s the keyboard shortcut for closing a program? It’s Alt-F4. This hasn’t changed since the days of Windows 3.1, and it’s a very commonly used shortcut for anyone who knows what keyboard shortcuts are at all. Not having to shuffle a mouse to the top-right corner of a box to close it saves seconds of effort, and those seconds add up when multiplied across an entire day’s work. Now, however, Dell in its infinite wisdom has decided that the out-of-the-box configuration requires pressing the “Fn” modifier key to use any of the F1-F12 keys for the functions they have performed for the past two decades. (Apparently Microsoft isn’t adding any extra combinations for “Alt-Brightness Down” anytime soon.) So, when I get on a Dell Inspiron 1545 laptop to perform service work and hit Alt+F4 to close a window, instead of getting the intended behavior, I’ve accidentally turned down the LCD brightness. Now I’m on the hook to press F5 to bump the brightness back up, then hit Alt+Fn+F4 to do what I originally intended.

Oh, but if you think that’s bad, it gets far, far worse! Let’s say I’m downloading a big driver file for a printer or display adapter; these are always hundreds of megabytes in size, yet 98% of the download is extra crap that isn’t required for printing a document or making a video card show cute rotating boxes. I’m waiting on a 200MB HP printer driver to come down the pipe, and while I wait, I’m performing other tasks. I find a file I need to rename for some reason, so I click the file and hit F2 to bring up the renaming function in Windows Explorer.

Guess what? Some complete and total asshats at Dell assigned F2 to be the magical key that disables the internal wireless adapter. Instead of renaming a file as intended, I just killed my wireless connection and lost the entire download. All that time waiting is lost as well, so I now get the privilege of waiting even longer for something that never should have been aborted in the first place. Just to make matters even worse, F2 is immediately above the number 2. Anyone who needs to type a 2 and overshoots the stroke could easily end up killing off their Internet connection instead. HP isn’t much better; while they usually put the wireless switch control on the F11 key instead of F2, F11 is still above the last keys on the number row and is still easy to accidentally press. Other functions such as internal/external monitor switching are almost as annoying, but tend to self-correct when they notice there’s no monitor to switch to, and so are somewhat more forgivable.

In the BIOS settings for most of these systems, an option exists to restore the function keys to their normal behavior, as it should have been from the factory! The user should never have to change a BIOS setting on a factory-released computer just to make the keyboard work properly. My problem is that the default factory setting is the one that favors accidentally killing your Internet connection and messing up your screen brightness. In my extremely not-humble opinion, every manufacturer that does this is stupid, and no one should purchase these computers; it’s not worth supporting this level of ignorance about how a computer is actually used. Combine this kind of foolishness with the “ClickPad” garbage being put into lots of laptops nowadays, particularly HP laptops, and some of the ridiculous keyboard layouts on cheap Compaqs from the past few years, and you have a recipe for brain-dead, productivity-hostile piles of crap that I wouldn’t accept for free.

Add one more thing to the growing list of “it’s not a bug, it’s a feature” nonsense I’m so tired of tolerating these days. Grumble, grumble.


Service providers that store user data need to GIVE USERS THE KEYS!

Comments on this post are welcome and strongly encouraged.

Service providers such as Gmail, Yahoo, Facebook, and Twitter need to offer users a data encryption option that does the following:

  1. It disables the password recovery system, so that no one can exploit it (or any weak link connected to it) to get into our accounts; the trade-off is that if we “forget” our passwords, we can’t get in either; and
  2. Our passphrase encrypts a larger key which encrypts our non-public data on their servers with 256-bit AES encryption.
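
To make item 2 concrete, here’s a minimal sketch of the two-layer key scheme using the openssl command line (the -pbkdf2 flag needs a reasonably recent OpenSSL, and every filename here is hypothetical):

# Generate a random 256-bit data key; this is what actually encrypts the data:
openssl rand -hex 32 > datakey.hex
# Wrap the data key with a key derived from your passphrase (you'll be prompted);
# the provider stores only this wrapped copy:
openssl enc -aes-256-cbc -pbkdf2 -salt -in datakey.hex -out datakey.wrapped
# Encrypt the actual data with the data key:
openssl enc -aes-256-cbc -pbkdf2 -salt -kfile datakey.hex -in mail.mbox -out mail.mbox.aes
# Destroy the plaintext key; only someone holding the passphrase can recover it:
shred -u datakey.hex

The point of the indirection is that changing your passphrase only means re-wrapping one small key, not re-encrypting gigabytes of stored data.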

In light of the fact that General Petraeus was brought down by someone who was neither him nor a personally trusted party accessing the data in his Gmail account, I think users need to be handed the keys to our accounts, and service providers need to give them up. By far the largest avenue for hackers to steal highly important or sensitive data is the “forgot password?” link at any given website. Our email accounts are almost universally used as a skeleton key to our other accounts. Mat Honan’s Gmail, Twitter, and Apple ID accounts were all hacked in the space of an hour this way, and the hackers deleted all of the data on his MacBook, iPhone, and iPad when they got in.

For services that offer this encryption option, there should also be an option to unlink all email accounts. Some services already exist that let you open an account where an email address is optional, but they’re not very common, and they tend to be obscure and small.

Obviously, this is something that won’t do much with services like Facebook and Twitter, because in order for the service to show tweets or posts to anyone else, they have to be readable by the service provider itself. However, if you’re on Facebook and change a post or picture to be visible to “only me,” the media should be encrypted with your encryption key, then all unencrypted copies deleted from the provider’s servers, including its content delivery network.

Another feature that absolutely needs to be in place is for all mail service providers to support mail delivery and relaying over SSL or TLS, so that email never crosses the wire as plaintext. If email is encrypted on Gmail and encrypted on Yahoo! Mail, then the server-to-server link between them needs to be encrypted too; ultimately, the amount of time an email spends stored or transmitted as plaintext should be minimized. It would also be nice if mail applications such as Mozilla Thunderbird had built-in encryption for the entire user profile (stored/locally cached mail, saved account passwords, configuration settings, etc.) protected by a master password, though most people just point to workarounds rather than asking Mozilla to add such support directly into Thunderbird. (What if I don’t want to install full-disk encryption software, or can’t, or want to use Thunderbird in a portable fashion on a flash drive?)
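
As an aside, you can already check whether a given mail server will negotiate encryption for that server-to-server hop; this is a standard openssl command, with the hostname being a hypothetical example:

# Connect to an SMTP server and attempt a STARTTLS upgrade; if a certificate
# and cipher are printed, the hop can be encrypted:
openssl s_client -starttls smtp -connect mail.example.com:25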

Yet another feature that would be very nice to have is a “lockdown” feature. You log into your encryption-enabled account on a service like Facebook or Twitter, go to some sort of security settings page, press a button called “lock down account,” and confirm that you really meant it. All media stored in your account is then automatically switched to “only me” privacy and encrypted in one shot, and any attached “escrow” methods of password retrieval, such as cell phone numbers or email addresses, are rendered unusable. If you have reason to believe that your data needs to be locked down quickly, having a feature like this is critical.

The biggest downside to this system is that if you lose or forget your password, you lose everything. The most common response to this “downside” will be “that’s a great feature to have!” and I strongly agree: if I don’t want anyone accessing my account, I desperately need to be able to lose the password with no means of recovery. However, another downside is that if someone gains access to your account, they can lock you out of your own data in exactly the same way you can lock others out. The most obvious answer would be some form of two-factor authentication, but adding TFA to the mix means, among other things, that if you lose your second factor, you can’t lock down your account or change your encryption password, so it’s a bit of a double-edged sword.

The major reason that “encrypt everything” has not been adopted by knowledgeable users is that it’s not available as an option, and where it is available, you have to jump through ridiculous hoops to get it set up and working. Things like the HTTPS Everywhere extension and Google switching its services to use HTTPS by default are steps in the right direction. The fact that anyone can get online and dig up your maiden name, social security number, city you were born in, first vehicle you owned, and much more within minutes and for small fees means that password recovery options with security questions and whatnot are the equivalent of locking your five deadbolts and leaving the key under the WELCOME mat. Furthermore, if the FBI, CIA, NSA, or some other three-letter agency decides they want to read your mail without your knowledge, there’s nothing at all stopping them from doing so.

One of the big arguments against encryption is that it allows bad people to hide bad things. News flash: bad people can use encryption even if you DON’T allow it. The only thing that happens when you don’t have encryption available is that GOOD people can’t protect themselves and their privacy so easily, but the bad guys have an extraordinary motivation to jump through the extra hoops required and certainly will do so to avoid being caught. This argument against providing encryption has no substance in a practical world.

In summary: Service providers need to give us the keys to our data.

P2P file sharing considerations and vulnerability list

This post isn’t really intended for my regular readers, but I’m making it public so that anyone interested in P2P network security issues can learn more about them, or chime in with anything I have overlooked. If you read my post where I announced my fourth-generation peer-to-peer file sharing project, you’ll understand the purpose of this post better. What follows is a link index of research material for me to look back on while I work out the details of the project, plus bullet-point design considerations that I don’t want falling off my radar. This list will be perpetually incomplete. Comment with any additional links or considerations you may have.

  1. Darknets and hidden servers: Identifying the true IP/network identity of I2P service hosts
  2. One cell is enough to break Tor’s anonymity
  3. Sybil attack
  4. End of the Road for Overpeer (how Overpeer introduced corruption into the FastTrack network)
  5. Kademlia article on Wikipedia
  6. Actions by copyright holders to curtail BitTorrent usage
  7. Peer-to-peer advantages and weaknesses
  8. Needs packet TTL to avoid loops, but having a TTL opens up hop-counting/distance-prediction vulnerabilities (see the sketch after this list).
  9. Nodes self-assigning unique identifiers can have benefits, but also opens up possible tracking/confirmation vulnerabilities.
  10. May need to come up with some method of confusing traffic confirmation attacks, but might be too expensive.
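
To make item 8 concrete, here’s a hedged sketch of one mitigation, along the lines of Freenet’s probabilistic hops-to-live: start the TTL at a randomized value and only decrement it probabilistically, so an observer who sees a packet’s TTL can’t reliably compute how far it has traveled (bash syntax, purely illustrative):

# Randomize the initial TTL so it never maps cleanly to a known starting point:
TTL=$(( 10 + RANDOM % 6 ))
# On each forward, decrement only 3 times out of 4; hop count no longer equals
# (initial TTL - current TTL), which blunts distance-prediction attacks:
if [ $(( RANDOM % 4 )) -ne 0 ]; then TTL=$(( TTL - 1 )); fi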

Completely disable Firefox disk caching and thumbnail generation for speed and paranoia

A comment on an Ars Technica article reminded me that people have been convicted of possession of child pornography in the past based solely on the contents of their web browser’s cache (Internet Explorer calls them “temporary internet files”). The problem is that these days, you don’t necessarily have to see or click on anything for it to land in your browser cache. Even ignoring questionable ads, unexpected pop-ups, and someone else touching your computer as sources of such garbage, actual “features” like link prefetching can load the contents of certain links on a page in anticipation of you clicking through to them, whether you ever do or not. It’s pretty scary to think about, but it can and does happen, and if some forensics guy ever sees the contents of your hard drive, you don’t want to worry about prefetched material you didn’t know was there landing you in hot water, especially given the “guilty until proven innocent” manner in which criminal court juries tend to operate.

Torrents, private emails, and other things that aren’t necessarily illegal at all (yet definitely deserve to be kept private) are stored in your browser cache, too. Even if you’re not concerned about the remnants of that virus you just quarantined having opened questionable websites for you, you might not want copies of your email to the boss you’re having an affair with being found by your nosy significant other, or you might have caught your kids downloading something they shouldn’t have over BitTorrent and want to make sure records of their faux pas aren’t floating about in the browser cache for the next few months.

Then there’s the technical aspect: more files on disk is generally a bad thing, because a folder with 5,000 entries is far slower to search through for one file than a folder with 100 entries (or an empty one). Wouldn’t it be great to eliminate the legal paranoia, speed up your browser, and stop it from polluting your hard drive with thousands of files you don’t care about, all at once? If you use Mozilla Firefox, turning off prefetching and disk caching is actually fairly simple once you know how. Note that memory caching is still in place, so you keep the speed benefits of caching; note also that memory contents can still end up in your paging file, so this isn’t 100% foolproof, but in terms of eliminating risk it’s a huge leap forward.

  1. Open Firefox. Go to the address bar, type about:config and hit [Enter].
  2. It might warn you not to play around. Click “I’ll be careful, I promise!”
  3. Type “prefetch” into the search box. You should see an option called “network.prefetch-next” which you can double-click to change to “false.”
  4. Search for “cache.disk” this time. Change “browser.cache.disk.enable” to “false” and change “browser.cache.disk.capacity” to “0.”
  5. Close and re-open Firefox.
  6. Hit [Ctrl] + [Shift] + [Delete] to bring up the “Clear Recent History” box. Change your time range to “Everything” and make sure “Cache” is checked. This erases the entire disk cache.
  7. For the really paranoid, install CCleaner (don’t install anything else it offers to install while you do it), find the “Wipe Free Space” option at the bottom of the left column, right-click on it, and choose “clean.” (It might warn you that it’s going to delete stuff, but proceed anyway.) This erases the contents of all of the empty space on the hard drive, including anything that was in the disk cache you just deleted and anything that has ever been deleted from the computer.
  8. [Update for newer Firefox versions] Firefox stores thumbnails of pages you visit for the new “New Tab” page previews. To get rid of this while you’re in about:config, right-click somewhere and go to New -> Boolean, call it browser.pagethumbnails.capturing_disabled and set it to true. Restart Firefox and no more behind-your-back thumbnails.

While you’re at it, you might want to install NoScript and Adblock Plus, and learn how to use them to protect against these things landing on your browser in the first place, but that’s beyond the scope of this post. Happy faster browsing, and tell your boss in your next email that I’ll see her this weekend. 😉 xoxo

Fourth generation peer-to-peer file sharing: my next project

Final Update

I have canceled the copyright-infringement-notice.com domain name and archived the text elsewhere on this blog. All of this content was written in 2012 and hasn’t been updated in years. I am keeping the post you’re currently reading for historical and entertainment purposes. If you follow any outdated advice or information given below, you do so entirely at your own risk. I am not a lawyer and only a fool would take anything I write as legal advice.


(WARNING: I make no promises here; my P2P software is vaporware until I get the details worked out. I don’t want anyone thinking there’s something coming until there actually IS something coming.)

First Generation: In the beginning, there was Napster. Napster was the first user-friendly MP3 sharing program. Sure, songs and media were shared via IRC and FTP sites before Napster, but Napster made it extremely simple and easy to share music with other people. The biggest problem with Napster was that the Napster servers ran everything: they maintained a master index of files and a list of users sharing those files, and connected users together to perform the actual transfer. When record labels got angry, they could easily point to Napster’s centralized catalog and say “there’s no reason you can’t block our songs from being downloaded, because you control the entire process!”

Second Generation: Ahh, yes…Morpheus, Grokster, LimeWire, and the infamous Kazaa. These networks dropped the central index by running searches directly from one computer to multiple other computers. In theory, this removed centralization and made it difficult to shut down the networks. Unfortunately, there was still centralization involved: someone had to tell the computers what other computers were on the network in the first place. The indexing of files was gone, but the network still largely relied on a parent company’s servers to operate. Some of this stuff is still around today with alternative servers being used, but they’re mostly defunct due to the third generation. Well, that and the fact that at least some of these networks had gaping security holes that were easily exploited to render them useless. It was easy as pie to flood the FastTrack network that powered Kazaa and Morpheus with corrupt data.

Third Generation: Simply put, BitTorrent and eMule. These systems are hybrids; they operate both from servers (in BitTorrent these are called trackers) and over a fully decentralized second network known as DHT (distributed hash tables, NOT dihydrotestosterone, for you chemistry nuts). Multiple servers are available and there is much less centralization involved; better still, DHT doesn’t go through “servers” at all: computers find each other through other computers, in what is known as the DHT “overlay network.” Completely open BitTorrent trackers exist that may be freely tacked onto existing torrents to prevent one tracker’s failure from killing the torrent.

However, one thing hasn’t changed since Napster: computers still communicate with each other directly, immediately revealing the IP addresses of the uploader and downloader to each other. Furthermore, the way these networks’ servers operate means that hostile parties such as the RIAA, MPAA, porn production companies, etc. can simply connect to a server, request a list of peers for a supposedly infringing file of interest, and the server hands them a big batch of IP addresses that have that file. Even if the servers didn’t make it so easy, it’s trivial to expend a little more effort and scan the DHT networks for peers with that file, so eliminating the servers wouldn’t fix the issue. This is how content owners gather lists of IP addresses to threaten and sometimes drag into court.

Generation 3.5: MUTE file sharing. I’ve labeled this “generation 3.5” because it never quite caught enough momentum to grow, and because it still suffers from many security issues that have plagued P2P sharing since the beginning. My solution to the IP address revelation problem is more complicated than MUTE’s, but the essential idea is the same: pass data to peers who then pass it along to their own peers, with the originating IP address never included. MUTE had the breakthrough idea for largely killing the IP address problem, but it seems that all the effort went into the design of the routing scheme and algorithm, while other logistical flaws were put on the back burner.

The most serious of these are the various forms of poisoning: index poisoning, where bogus index results come back, sometimes in huge enough quantities to make locating the intended data extremely difficult and frustrating; and file poisoning, where the index results point to real files that do not contain the expected content. In the days of the FastTrack network, this became very common, the worst example being MP3 files containing the first 20 seconds of a song looped repeatedly and cut off at the same track length as the original, meaning that a cursory listen to the beginning of the MP3 would “pass the test” while the file was not actually what was desired.
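
One standard countermeasure for file poisoning, which BitTorrent already applies per piece, is to verify every chunk of a download against cryptographic hashes published in the index, so a poisoned chunk is detected and re-fetched rather than silently accepted. Here’s a minimal sketch with standard tools (filenames hypothetical):

# Publisher: split the file into chunks and publish a hash manifest with it:
split -b 1M song.mp3 chunk.
sha256sum chunk.* > song.manifest
# Downloader: verify every received chunk against the manifest; a looped
# 20-second fake fails immediately instead of passing a cursory listen:
sha256sum -c song.manifest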

More Gen3-esque Software: Perfect Dark and Freenet. These programs have routing constructs similar to MUTE, combined with encrypted caches on users’ hard drives as their “storage.” The only way to retrieve a file is to request it by its “key.” These networks add deniability to the storage of the data, since there’s no way for a user to know what’s in the encrypted data store. Unfortunately, these programs also have their issues: Freenet is designed to work like the Web rather than to share large files, and tends to be fairly slow and/or unreliable for that purpose (unpopular content in particular will slow down and eventually just vanish). Perfect Dark uses DHT, so it is no more secure for uploaders and downloaders than any other DHT implementation. Some users of Perfect Dark have been arrested in Japan for uploading popular television series, proving that Perfect Dark does not protect anonymity in any meaningful way.

The next generation of file sharing programs has to fix the IP address issue completely, while also combating the other major security problems (poisoning, denial-of-service attacks, and the like) that have gone insufficiently addressed in previous peer-to-peer file sharing programs.

Don’t get too excited, but here’s where I am going with this: I am hesitant to announce vaporware, but given the amount of interest in my posts regarding copyright infringement notices and my own casual interest in the chilling effects of copyright trolling on free exchange of information and ideas, I have been working out the details of a fourth generation file sharing protocol that solves almost all of the issues surrounding file sharing’s general lack of anonymity and ease of censorship through lawsuits and settlement demands/threats.

I thought about how to fix the problems with torrents and DHT systems such as Kademlia. The solutions that came to mind seemed obvious, but the practical applications I began to sketch out were full of glaring holes. When I solved the problem of an uploader or downloader being tracked down by IP address (the obvious problem with all current systems, as the lawsuits and settlement demands clearly show), I thought I was a genius and wondered why no one else had come up with the same solution…until I found programs like MUTE, which work in a similar fashion. I then thought about the problem in more depth and realized that my perfect little system for losing the traceability of IP addresses was merely the tip of the iceberg. DoS attacks, index and file poisoning, hash collisions, plausible deniability, man-in-the-middle attacks, and Sybil attacks are just a portion of the problems that have to be solved, and I think I’ve answered most (if not all) of them.

At some point, I’ll need help testing and implementing this, taking it cross-platform, and getting the word out about it once it’s confirmed to work as expected and stress tested in the real world. For now, I’m writing this to let my readers and the Internet at large know that the problem is being worked on. I look forward to the day that copyright trolls are, in a technical sense, neutered.

Here’s to my ideal P2P file sharing vaporware. When it’s more than an idea on paper, I’ll make a new post and link to it here. Stay tuned, everyone; this will be interesting.

Toshiba keyboard and touchpad both not working or malfunctioning? The solution may surprise you.

THE PROBLEM: A Toshiba Satellite L305-S5933 laptop came into the shop recently with a non-functioning internal keyboard and touchpad. The keyboard worked fine in the BIOS and prior to booting an operating system, but in Windows neither device was functional at all, and in the Tritech Service System (a custom Linux distribution we use at Tritech Computer Solutions for checking out computers) keystrokes were severely delayed or missed completely. Either way, the keyboard clearly worked before an OS fully booted and stopped functioning once one was running. USB input devices worked fine.

THE SALT IN THE WOUND: There are posts all over the place mentioning problems similar to this, with theories about Windows updates and BIOS updates and drivers all over the place, but none of them are helpful and none of the posts were solved or had any kind of follow-up. In short, no one seems to have any solid lead on fixing this issue.

THE SOLUTION: In the case of the Toshiba we inspected, the touchpad was bad. The failed touchpad also kept the keyboard from being able to operate while in an operating system. To confirm this, we removed the keyboard and disconnected the touchpad, which immediately caused the keyboard to start operating correctly.

THE TECHNICAL EXPLANATION: On a laptop, the keyboard and mouse (touchpad) are what are known as PS/2 devices. Since the days of the IBM PS/2 computer, a dedicated chip called the 8042 keyboard controller has existed in PCs, powering two special serial ports: one for the keyboard and one known as the AUX port, which is always used for a mouse. Though the 8042 is no longer a standalone component, identically functioning circuitry is in practically every PC laptop and desktop in existence. What does all this have to do with the touchpad knocking out the keyboard? It’s actually quite simple: the 8042 controls both devices, and the defective touchpad was flooding the 8042 with garbage data. If one channel floods the controller chip with data, the other channel is “starved” of bandwidth and can’t get its information through. Think of it as someone yelling words rapidly into your left ear while someone else talks normally into your right: you can’t possibly follow both conversations, because one drowns out the other. That’s how a toasted touchpad can cause your keyboard to not function at all.

HOW WE FIGURED IT OUT: The key knowledge here is that the two PS/2 devices are attached to the same controller chip. Bringing up the “top” command in the Tritech Service System shows the CPU usage of running processes, in decreasing order of CPU usage by default. We noticed that two of the “kworker” threads were eating 1.5% to 1.8% of the CPU at all times. (A kworker thread is a helper that runs inside the Linux kernel to perform various tasks on its behalf; it is not an ordinary user program.) The next logical step after noticing this unusual behavior on a clean system that had worked very well on every previous Toshiba Satellite L300-series laptop was to unplug the keyboard and touchpad and see if anything changed (this requires minor disassembly of the keyboard area of the laptop).

Unplugging the keyboard ribbon cable had zero effect. However, sliding out the ribbon for the laptop touchpad caused the kworker threads to completely cease using CPU. Connecting the keyboard back up and attempting to use it confirmed that removal of the mouse/touchpad from the equation brought back full functionality in the keyboard. Diagnosis: bad touchpad.
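
If you’re chasing a similar ghost, the same observation can be made with a quick one-liner in any Linux environment; this is a hedged sketch using standard procps tools, not something specific to our service system:

# List the top CPU consumers, kernel threads included; mysteriously busy
# kworker threads on an otherwise idle machine hint at a misbehaving device:
ps -eo pid,comm,%cpu --sort=-%cpu | head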

One of the reasons we advocate that aspiring technicians seek general knowledge about how computers work, rather than the specific situational solutions an A+ certification test targets, is situations exactly like this one. Knowing that the keyboard and mouse run through the same controller chip was the only thing standing between an average technician and knowing where to troubleshoot further; with that knowledge, the diagnosis could just as easily have been performed in Windows as in Linux.

It’s important to understand as much as you can about the general workings of a computer; the standard PS/2 keyboard/mouse controller has been around for a very long time, and it’s easy for an aspiring technician to ignore in an era where many new computers only use USB connectivity and have thrown PS/2 hardware out the window. Don’t ignore something just because it’s slightly obscure or a carry-over from the computing days of old! You never know when that obscure knowledge will turn out to be the missing puzzle piece in a confounding, frustrating issue that you’d otherwise waste many hours poking and prodding at.