Wednesday, June 18, 2008

Axon, axoff

A friend of mine poses an interesting question on his blog:




In most of the equipment I work with, the sensors (RTDs – heat sensors that tell the controller the temperature of something; photocells – proximity sensors that sense when the product, i.e., a box, is passing a certain point on a conveyor; limit switches – simple mechanical open/close switches that would tell the controller a safety door was open or a button was being pushed) each typically communicate with the controller through their own pair of wires that run from the sensor to an input/output board in the controller. You can end up with a lot of wires.


Now there are systems that use control networks and “smart sensors” that not only send the information for which they exist (“115 ohms, meaning 350 degrees F”, “I sense a box!”, or “button pushed”) but also a packet of info identifying the sensor. This way, you can run all this information through a single cable instead of hundreds of wire pairs.


My question is… in which way does the human body work? The first (dumb open/close switches) or the second (smart sensors)?




His guess is that it’s the second option (his initial guess, at least). My guess was the first. Wikipedia talks of nerve cells, but I can’t seem to find a smoking gun. Opinions?
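For what it’s worth, here’s a toy sketch of the difference he’s describing. Everything in it is invented for illustration (real control networks such as Modbus or CAN define their own framing); the point is just that a dedicated wire pair carries a bare signal whose meaning lives in the wiring diagram, while a bus packet identifies its sender along with the reading.

```python
# Toy illustration of the two wiring schemes from the quoted question.
# Names and the "packet" layout are made up; no real fieldbus works
# exactly like this.

from dataclasses import dataclass
from typing import Union

# Scheme 1: "dumb" sensors -- one dedicated input per wire pair.
# The controller only knows that terminal 7 went high; what that means
# is documented in the wiring diagram, not in the signal itself.
io_board = {7: False, 8: True, 9: False}    # terminal number -> current state

def read_terminal(terminal):
    return io_board[terminal]

# Scheme 2: "smart" sensors -- every reading travels over one shared
# bus as a small packet that says who sent it and what it measured.
@dataclass
class SensorPacket:
    sensor_id: str               # who is talking
    kind: str                    # what it measures
    value: Union[float, bool]    # the reading itself

bus = [
    SensorPacket("RTD-12", "temperature_F", 350.0),
    SensorPacket("PHOTO-3", "box_present", True),
    SensorPacket("LIMIT-9", "door_open", False),
]

for packet in bus:
    print(f"{packet.sensor_id}: {packet.kind} = {packet.value}")
```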

"May I ask you, inquired the elevator in its sweetest, most reasonable voice, if you've considered all the possibilities that down might offer you?"

Elevators are fascinating things. I say that despite being trapped in one for forty-five minutes last week (no lie: five other strangers and I found ourselves stuck 14 floors up, making nervous jokes during a rush-hour malfunction).


What I often ponder is the ideal algorithm for running one, or more critically, a set of elevators. Poking around, I’ve found that my initial hunch was correct: there’s what’s referred to as the elevator algorithm, which dictates that an elevator keeps moving in its current direction, stopping only to let people on or off, until it has no more calls in that direction.



At that point it can either sit there and wait (which is probably more energy- and cost-efficient) or head to a more useful floor (the lobby when people are expected to arrive, or the top floor when they’re expected to leave).
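Here’s a minimal sketch of that rule in code. The floor numbering and the call interface are invented; real controllers distinguish hall calls from car calls and do a good deal more.

```python
# A bare-bones version of the elevator algorithm described above: keep
# moving in the current direction while there are calls ahead, serve a
# call when you reach its floor, and reverse (or go idle) when nothing
# is left in that direction. The API here is invented for illustration.

class Elevator:
    def __init__(self, floor=1):
        self.floor = floor
        self.direction = 0      # +1 going up, -1 going down, 0 idle
        self.calls = set()      # floors requested, inside or outside the car

    def request(self, floor):
        self.calls.add(floor)

    def step(self):
        """Advance the car by one floor (or serve/park) per tick."""
        if not self.calls:
            self.direction = 0              # nothing to do: park here
            return
        if self.direction == 0:
            self.direction = 1 if max(self.calls) > self.floor else -1
        ahead = [f for f in self.calls if (f - self.floor) * self.direction > 0]
        if self.floor in self.calls:
            self.calls.discard(self.floor)  # open the doors at this floor
        elif ahead:
            self.floor += self.direction    # keep going the same way
        else:
            self.direction *= -1            # no calls ahead: turn around
```

In this toy version the car just parks wherever it served its last call; the “go wait in the lobby” variant would instead send it toward a home floor whenever the call set empties.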


Some other interesting factoids:



  • The elevator algorithm is also used for hard disk access, to optimize the motion of the arm when dealing with read/write requests.
  • In areas with a large Jewish population, you will often find Sabbath elevators, which operate in accordance with certain Orthodox and Conservative rabbinic prohibitions. Wild!
  • Some modern elevators (including, apparently, the one in the Adelaide office of my company) require users to select their desired floor at a central console; they are then told which numbered elevator to board. Inside the elevator, there are no buttons to push. This is apparently much more efficient (a toy sketch of such a dispatcher follows this list), but it has some human-factors drawbacks:

    • The console doesn’t recognize when a group of people is too large to fit in a single elevator.

    • A single person requesting an elevator multiple times might end up with multiple elevators dispatched to retrieve her/him.

    • People who don’t know the system often get on an elevator and end up being taken for a ride!
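Just as a sketch (I have no idea how the real installations decide), the heart of a destination-dispatch console is a scoring step: each car is costed against the new request and the caller is told which car won. Here the cost is nothing more than distance plus pending stops, which a real system would never settle for.

```python
# A toy destination-dispatch assignment. The data layout and the cost
# function are invented for illustration; real dispatchers model load,
# direction, energy use, and much more.

def assign_car(cars, origin_floor, destination_floor):
    """cars: list of dicts like {"name": "A", "floor": 3, "stops": [5, 9]}."""
    def cost(car):
        # crude score: how far the car is from the caller, plus how
        # many stops it already owes other passengers
        return abs(car["floor"] - origin_floor) + len(car["stops"])
    best = min(cars, key=cost)
    best["stops"].extend([origin_floor, destination_floor])
    return best["name"]

cars = [
    {"name": "A", "floor": 1, "stops": []},
    {"name": "B", "floor": 10, "stops": [12]},
]
print("Please board car", assign_car(cars, origin_floor=3, destination_floor=14))
```

The group-size problem above drops straight out of this: the console counts requests, not bodies, so a crowd of ten behind one button press looks exactly like a single rider.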



What other heuristics could you use if you had to program a set of elevators?


Another thing to ponder: how could you determine the finer points of the algorithm used in a given office building, just by calling and riding the elevators? It would seem to require an accomplice at the very least.

Unconventional solutions to computer miscreants

Decades ago, the writers of an early “shared” operating system known as the Incompatible Timesharing System, or ITS, got so fed up with people deliberately trying to find ways to crash the system that they came up with a novel solution: a KILL SYSTEM command that any user could run to crash the system (presumably to take all the fun and challenge out of it). I love that. While it’s hard to imagine such a feature being implemented in a modern operating system, I believe the spirit of the idea might still be usable in other contexts.



Pretty much every single online gaming website I’ve seen has a problem with people running cheats: computer programs that stand in for the human and respond with uncanny precision or speed. I know FPS games have a large problem with cheaters who can fire with deadly aim (among other tricks), but my own experience is with more basic games. I used to spend a lot of time on online versions of the word game Boggle, including PlaySite’s Tangleword (which is now IWin’s Boggle) and Yahoo’s Word Racer. Cheaters were a rampant, recurring, and frustrating problem. People would write programs that generated all the words for a given board from a dictionary, and would achieve phenomenal scores as a result, much to the chagrin of everyone trying to win on brainpower alone. Writing such a program is not difficult (I know, because I wrote one; I never used it to win, except against other cheaters), but these people were difficult if not impossible to discourage. Might the solution be to allow anyone a “super-user” account that always wins? I’d love to see the experiment done.
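Writing one really is a small exercise: a depth-first search from each cell over its neighbours, pruned and checked against a word list. The sketch below is generic (and not the program I wrote); the board and word list are placeholders.

```python
# A bare-bones Boggle-style solver of the kind described above: from
# every cell, walk to adjacent cells without revisiting any, and keep
# the paths that spell words from the list. Board and words are toys.

def solve(board, words):
    rows, cols = len(board), len(board[0])
    found = set()

    def neighbors(r, c):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    yield nr, nc

    def search(r, c, path, visited):
        path += board[r][c]
        if not any(w.startswith(path) for w in words):
            return                              # no word starts this way: prune
        if len(path) >= 3 and path in words:
            found.add(path)
        for nr, nc in neighbors(r, c):
            if (nr, nc) not in visited:
                search(nr, nc, path, visited | {(nr, nc)})

    for r in range(rows):
        for c in range(cols):
            search(r, c, "", {(r, c)})
    return found

board = ["cat", "ers", "ton"]                   # a 3x3 toy board
words = {"cat", "cats", "rest", "notes", "ton"}
print(solve(board, words))
```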



I recently worked on a major redesign of the website of a large maker of tourist guidebooks. They had just fully embraced the idea of letting users supply content across many different areas of the site, but in every case we had to seriously consider the possibility of malicious uploads, largely because of one pathological individual who had been carrying out a vendetta against the company for the last decade. On every public forum on their old site he took every opportunity to add comments that were embarrassing, confusing, malicious, or disgusting. He would create new accounts as soon as his old ones were banned, often several times in the same day. (The solution we implemented for the redesign was that uploads everywhere had to be approved by a moderator.)



I got to wondering: what if, instead of banning such an individual, his account got tagged in such a way that he could still view his postings, but no one else could? Presumably he might never know that his account was so tagged, and he would continue to waste his energy devising malicious missives when in fact his words were reaching nobody. The tag would take effect whenever he was logged in or a cookie was set on his machine (presuming he doesn’t disallow them; most forums require you to allow cookies anyway). I’m sure the most persistent people would eventually catch on and resort to logging out and removing the cookie, or checking from another machine to see if their posts were actually getting through, but at the very least it would increase the burden on these lowlifes.
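Mechanically, the whole idea is a single extra check at read time. A sketch, with invented names and nothing taken from the site we actually built:

```python
# A sketch of the "visible only to its author" idea: posts from a
# shadow-tagged account are stored normally but filtered out of every
# listing except the author's own view. All names are invented.

shadow_tagged = {"vendetta_guy"}        # accounts flagged by a moderator

posts = [
    {"author": "alice", "body": "Great guidebook!"},
    {"author": "vendetta_guy", "body": "(something malicious)"},
]

def visible_posts(viewer):
    """Return the posts this particular viewer should see."""
    return [
        p for p in posts
        if p["author"] not in shadow_tagged or p["author"] == viewer
    ]

print(len(visible_posts("alice")))          # 1 -- the tagged user's post is hidden
print(len(visible_posts("vendetta_guy")))   # 2 -- he still sees his own
```

The cookie variant would just add a second condition alongside the account check, so the filter still applies when he browses while logged out.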



This idea might work with the problem of cheaters on some gaming sites too. If their account gets tagged, they can still “log in”, but no one would see their scores but them. I’ve found that such cheaters actually thrive on the outraged comments they generate, but I could never convince other players to just ignore them in the comment areas. When such a “secretly blacklisted” tag is set by a moderator, no one would see the cheater’s comments either, though he would see theirs. To the cheater, it would just seem like he was being ignored. It would be trickier for FPS-type games: when the cheater kills someone, the server would have to pretend (to the cheater’s client machine) that the person’s avatar had died, and send no more updates as to that person’s whereabouts. Some situations wouldn’t be fakable, but I bet you could fool a lot of the cheaters much of the time. It would be the ultimate pwn.



Could spam be dealt with similarly? I’ve always wished there were an option in email programs that allowed you to respond as if your account didn’t exist. That is, send the exact response to the sender that the mail server on your host would send if there were no such login on that host. I’m not fully sure this could work, though: is the check for whether an account exists done during the initial handshake between sender and receiver, or at some later point?
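If memory serves, the rejection usually comes a step or two after the handshake rather than during it: the receiving server typically refuses an unknown mailbox in response to the RCPT TO command, once the connection and MAIL FROM have already been accepted, although some servers accept everything and only bounce later. A rough probe of that behaviour, with a placeholder host and addresses:

```python
# A sketch of where "no such user" typically shows up in SMTP: not in
# the initial connection/EHLO exchange, but at the RCPT TO step. The
# host and addresses below are placeholders, and some servers accept
# every recipient and bounce afterwards, so this probe isn't reliable.
import smtplib

server = smtplib.SMTP("mail.example.com")          # placeholder host
server.ehlo()
server.mail("sender@example.org")                  # MAIL FROM is usually accepted either way
code, message = server.rcpt("nobody@example.com")  # RCPT TO is where the mailbox check happens
server.quit()

print(code, message)   # 550 here would suggest the account "doesn't exist";
                       # 250 means the recipient was accepted
```

Which suggests the spoofing would have to live in the receiving mail server itself rather than in the email program, but I could be wrong about that.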