Unconventional solutions to computer miscreants
Decades ago, the writers of an early “shared” operating system known as the Incompatible Timesharing System or ITS got so fed up with people deliberately trying to find ways to crash the system that they came up with a novel solution — a KILL SYSTEM command, runnable by anyone, that would crash the system (presumably to take all the fun and challenge out of it). I love that. While it’s hard to imagine such a feature being implemented in a modern operating system, I believe the spirit of the idea might still be usable in other contexts.
Pretty much every online gaming website I’ve seen has a problem with people running cheats — computer programs that stand in for the human and respond with uncanny precision or speed. I know FPS games have a large problem with cheaters who can fire with deadly aim (among other tricks), but my own experience is with more basic games. I used to spend a lot of time on online versions of the word game Boggle, including PlaySite’s Tangleword (which is now IWin’s Boggle) and Yahoo’s Word Racer. Cheaters were a rampant, recurring, and frustrating problem. People would write programs that generated all the words for a given board from a dictionary, and would achieve phenomenal scores as a result, much to the chagrin of everyone trying to win on brainpower alone. Writing such a program is not difficult (I know, because I wrote one; I never used it to win, except against other cheaters), but these people were difficult if not impossible to discourage. Might the solution be to allow anyone a “super-user” account that always wins? I’d love to see the experiment done.
I recently worked on a major redesign of a website for a large maker of tourist guidebooks. They had just fully embraced the idea of letting users supply content across many different areas of the site, but in every case we had to seriously consider the possibility of malicious uploads, largely because of one pathological individual who had been carrying out a vendetta against the company for the last decade. On every public forum on their old site he took every opportunity to add comments that were embarrassing, confusing, malicious, or disgusting. He would create new accounts as soon as his old ones were banned, often several times in the same day. (The solution we implemented for the site redesign was that uploads everywhere had to be approved by a moderator.)
I got to wondering: what if, instead of banning such an individual, his account got tagged in such a way that he could still view his postings, but no one else could? Presumably he might never know that his account was so tagged, and would continue to waste his energy devising malicious missives when in fact his words were reaching nobody. The tag would take effect whenever he was logged in, or whenever a cookie was set on his machine (presuming he didn’t disallow cookies; most forums require you to allow them). I’m sure the most persistent people would eventually catch on, and resort to logging out and removing the cookie, or checking from another machine to see whether their posts were actually getting through, but at the very least it would raise the burden on these lowlifes.
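The core of the idea is just a per-viewer visibility filter. Here is a minimal sketch in Python; all the names (`Post`, `SHADOWBANNED`, `visible_posts`) are hypothetical, not from any real forum software:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    body: str

# Hypothetical set of secretly-blacklisted account names,
# set by a moderator.
SHADOWBANNED = {"troll42"}

def visible_posts(posts, viewer):
    """Return the posts this viewer should see: everyone sees posts
    from normal accounts; a shadowbanned author additionally sees his
    own posts, so from his side nothing looks amiss."""
    return [
        p for p in posts
        if p.author not in SHADOWBANNED or p.author == viewer
    ]

posts = [Post("alice", "hello"), Post("troll42", "spam spam")]

# The troll sees both posts; everyone else sees only alice's.
assert [p.author for p in visible_posts(posts, "troll42")] == ["alice", "troll42"]
assert [p.author for p in visible_posts(posts, "bob")] == ["alice"]
```

The same filter would naturally apply to any list the site renders — comments, score tables, search results — so the blacklisted user’s whole world stays self-consistent.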
This idea might work for the problem of cheaters on some gaming sites too. If a cheater’s account gets tagged, he can still “log in”, but no one sees his scores but him. I’ve found that such cheaters actually thrive on the outraged comments they generate, but I could never convince other players simply to ignore them in the comment areas. Once a moderator sets such a “secretly blacklisted” tag, no one would see the cheater’s comments either, though he would still see theirs. To the cheater, it would just seem that he was being ignored. It would be trickier for FPS-type games: when the cheater kills someone, the server would have to pretend (to the cheater’s client machine) that the victim’s avatar had died, and send no further updates on that player’s whereabouts. Some situations wouldn’t be fakable, but I bet you could fool a lot of the cheaters much of the time. It would be the ultimate pwn.
Could spam be dealt with similarly? I’ve always wished there were an option in email programs that allowed you to respond as if your account didn’t exist. That is, send the sender exactly the response that the mail server on your host would send if there were no such login on that host. I’m not fully sure this could work, though; is the check for whether an account exists done during the initial handshake between sender and receiver, or at some later point?
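For what it’s worth, in SMTP the recipient is named partway through the dialogue, not in the initial handshake: after the connection and HELO/EHLO greeting, the sending server issues a RCPT TO command, and that is where a receiving server can answer with a 550 “no such user” rejection. A toy sketch of that exchange (deliberately simplified — a hypothetical server with a hard-coded mailbox list, not a real SMTP implementation):

```python
KNOWN_USERS = {"alice"}  # hypothetical mailbox list for example.com

def smtp_reply(command: str) -> str:
    """Reply to one SMTP command for a toy receiving server.

    The unknown-user rejection happens at the RCPT TO step,
    mid-conversation, after the greeting has already succeeded.
    """
    verb, _, arg = command.partition(" ")
    if verb == "HELO":
        return "250 example.com"
    if verb == "MAIL":
        return "250 OK"  # sender address accepted without mailbox checks
    if verb == "RCPT":
        # arg looks like "TO:<user@example.com>"
        user = arg.split(":<", 1)[1].split("@", 1)[0]
        return "250 OK" if user in KNOWN_USERS else "550 No such user here"
    return "500 unrecognized command"

assert smtp_reply("RCPT TO:<alice@example.com>").startswith("250")
assert smtp_reply("RCPT TO:<nobody@example.com>").startswith("550")
```

So the scheme seems at least plausible in principle: a mail server could be configured to give a spammer’s RCPT TO the same 550 reply a nonexistent account would get, while quietly delivering legitimate mail.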