Programming has been a passion of mine since I was very young. I taught myself to write programs from the manual that came with my home computer and haven't stopped since. Most of what I do today is written either as shell scripts or as C code. Here's a brief overview of the software I have to offer.
jdupes is a duplicate file scanning tool for Linux, Windows, Mac OS X, and most UNIX-like platforms. It offers a wide variety of actions to take on duplicate file sets and filters to control how those duplicate sets are found and arranged. Forked from fdupes (and the subject of a lengthy blog post chronicling its growth during development), jdupes is one of the fastest and most versatile duplicate file finders in existence.
This one always turns heads and cranks necks when I explain what it does: it's a FUSE filesystem that lets you use Windows registry hive files as if they were an ordinary filesystem full of text and binary files. With winregfs, scripts that need to work on registry hives become effortless. Other tools exist for working with Windows registries under Linux, such as reged (part of chntpw, which winregfs is actually based on), but none of them are easy to incorporate into a shell script. I use winregfs to extract the Windows version and edition, set driver services from System startup to Boot startup (echo 0 > HKLM\ControlSet001\services\[service]\Start.dw), copy user settings from one hive to another, and much more. It is still buggy and certain types of data will cause it to crash, but for the vast majority of registry data I have to work with, it is stable and does what it was designed to do very well.
This was my first practical C program, and it's both extremely simple and quite odd in its purpose. It hasn't been improved much since I originally wrote it, but it fills an obscure niche I couldn't find any other solution for. The traditional way of running 64- and 32-bit programs on the same Linux system is to have separate 'lib' directories for each and a multilib-enabled toolchain to build the software with. After huge amounts of frustration and failure trying to build a multilib toolchain myself, I decided to take a different approach: unpack a native system of the opposite bitness into a directory (e.g. /chroot32), set up a series of bind mounts (/etc, /dev, /proc, /sys, /home), and use the 'chroot' command to switch into the other system. The catch: I had downloaded the 32-bit version of Firefox but could not run it directly, because it required a chroot into the alternate system before it could be executed, so I'd have to open a terminal and type a bunch of junk manually. The simplest solution was a C program that would "wrap" the chroot action and the execution transparently, so that if the 64-bit environment ran /usr/bin/firefox (a symlink to /bin/alterego) it would automatically change the root filesystem and then run the intended binary on the user's behalf. I have not used it in a long time and am not motivated to add improvements, but if it finds a user base I'd be happy to resume work on it.
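The wrapper boils down to a handful of system calls. Here's a minimal sketch of the idea, not alterego's actual source; the function name run_in_alterego is mine, and /chroot32 is just the example directory from the description above:

```c
#include <stdio.h>
#include <unistd.h>

/* Sketch of a transparent chroot-and-exec wrapper: change the root
 * filesystem to the alternate system tree, then exec the same path
 * inside it on the user's behalf. Only returns on failure, since a
 * successful execv() replaces the current process image. */
int run_in_alterego(const char *newroot, char *argv[])
{
	if (chroot(newroot) != 0) {	/* needs root privileges */
		perror("chroot");
		return -1;
	}
	if (chdir("/") != 0) {		/* don't leave cwd outside the new root */
		perror("chdir");
		return -1;
	}
	execv(argv[0], argv);		/* e.g. /usr/bin/firefox inside /chroot32 */
	perror("execv");
	return -1;
}
```

In the real program, main() would simply call run_in_alterego("/chroot32", argv), so that invoking a symlink such as /usr/bin/firefox -> /bin/alterego re-executes that same path inside the 32-bit tree.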
The ELKS Project (Embeddable Linux Kernel Subset, also called Linux-8086, somewhat incorrectly) is an attempt at making a tiny Linux-like system for computers based on an Intel 8086-compatible processor, such as the IBM PC/XT. I had held a casual interest in the project since the late 1990s but was not able to contribute in any meaningful way. Development on ELKS came to a complete halt in November 2006 when the last developer stopped contributing. Fortunately, there was one maintainer left "awake," and when I put out an offer to take over as the maintainer of ELKS, I was immediately handed the keys to the kingdom! I was still an amateur C programmer at the time, but I could build, test, patch minor issues, package, and write scripts, so I slowly hacked away at the code base until I released something truly game-changing: ELKS 0.2.0, featuring a build script that made building and testing so easy that an amateur could do it. ELKS continues to improve through the contributions of a very small but dedicated and brilliant group of developers, and I'm proud to be the person steering the project.
It's easier to delete malicious software infections on Windows machines from a live Linux CD than from within the infected Windows installation. That's why I originally created the Tritech Service System: my live CD distribution of choice went without updates for several years and became useless, so I had to find another one or make my own. The history of TSS is complex, but the end result is an extremely compact yet modern live Linux CD, USB, or network boot system that runs entirely in RAM and doesn't require access to the original boot device to be useful. TSS consists of two parts: a publicly available release, which boots to a text-mode prompt and can be used to mount and work with offline Windows systems, and a set of proprietary packages, which I keep to myself, that take TSS from a general-purpose live Linux to an advanced diagnostic and repair tool integrated with the work order management system at my computer service business, Tritech Computer Solutions. If you are fluent on a Linux command line and don't mind the lack of manual pages, you'll probably be comfortable playing with TSS.
If you run software on a remote Linux machine across a LAN and you need to play sounds and notifications on your local machine, this set of scripts and free software executables will let you do it. You can use a local command on any Linux machine attached to your LAN to trigger sound events on any listening Windows or Linux machine. If the literal sound name is not accessible, the software does some basic decision-making and scanning to improve the chances that a notification plays successfully. Broadcast and listen modes are included in the Linux/UNIX script "snd2remote", and a batch file (with the required supporting executables) is also provided for Windows systems to receive events.
jodyhash has been abandoned as a separate project; faster algorithms with better distribution properties already exist. I wrote it both to scratch an itch and as a learning tool, and while it doesn't seem to produce collisions in my own software, numerous other people have put it through synthetic tests that reveal poor avalanche and distribution properties, as well as lower performance than better-distributed hashes such as xxHash64 and t1ha. Part of being a good programmer is accepting that a better solution exists instead of falling into "not invented here" syndrome.
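The synthetic tests mentioned above are easy to write yourself. Here's a sketch of a basic avalanche probe; it uses FNV-1a (a simple, well-known 64-bit hash standing in for the hash under test, since jodyhash's internals aren't reproduced here). Flipping one input bit should change about half of the 64 output bits in a well-mixed hash:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* FNV-1a: XOR each byte into the state, then multiply by the FNV prime. */
static uint64_t fnv1a64(const void *data, size_t len)
{
	const unsigned char *p = data;
	uint64_t h = 14695981039346656037ULL;	/* FNV-1a offset basis */

	for (size_t i = 0; i < len; i++) {
		h ^= p[i];
		h *= 1099511628211ULL;		/* FNV prime */
	}
	return h;
}

/* Avalanche probe: flip one input bit and count how many of the 64
 * output bits change. A well-mixed hash averages close to 32. */
static int avalanche_bits(const void *data, size_t len, size_t bit)
{
	unsigned char buf[64];
	uint64_t diff;
	int count = 0;

	memcpy(buf, data, len);
	buf[bit / 8] ^= (unsigned char)(1U << (bit % 8));
	diff = fnv1a64(data, len) ^ fnv1a64(buf, len);
	while (diff) {
		count += (int)(diff & 1);
		diff >>= 1;
	}
	return count;
}
```

Running avalanche_bits over every bit position of many inputs and averaging the counts is the kind of distribution measurement that exposed jodyhash's weaknesses.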