Wednesday, June 18, 2008

Axon, axoff

A friend of mine poses an interesting question on his blog:




In most of the equipment I work with, the sensors (RTDs – heat sensors that tell the controller the temperature of something; photocells – proximity sensors that sense when the product, i.e., a box, is passing a certain point on a conveyor; limit switches – simple mechanical open/close switches that would tell the controller a safety door was open or a button was being pushed) each typically communicates with the controller through its own pair of wires that go from the sensor to an input/output board in the controller. You can end up with a lot of wires.


Now there are systems that use control networks and “smart sensors” that not only send the information for which they exist (“115 ohms meaning 350 degrees F”, “I sense a box!”, or “button pushed”) but also a packet of info identifying the sensor. This way, you can run all this information through a single cable instead of hundreds of wire pairs.


My question is… in which way does the human body work? The first (dumb open/close switches) or the second (smart sensors)?




His guess is that it’s the second option (his initial guess, at least). My guess was the first. Wikipedia talks of nerve cells but I can’t seem to find a smoking gun. Opinions?

"May I ask you, inquired the elevator in its sweetest, most reasonable voice, if you've considered all the possibilities that down might offer you?"

Elevators are fascinating things. I say that despite being trapped in one for forty-five minutes last week (no lie — myself and five other strangers found ourselves stuck 14 floors up, making nervous jokes during a rush hour malfunction).


What I often ponder is the ideal algorithm for running one, or more critically, a set of elevators. Poking around I’ve found that my initial hunch was correct — there’s what’s referred to as the elevator algorithm that dictates that an elevator will move in its current direction, stopping only to let people on or off, until it has no more calls in that direction.



At that point it can either sit there and wait (which is probably more energy and cost efficient) or try to go to a more useful floor (the lobby when people are expected to arrive, or the top floor when they’re expected to leave).
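Here’s a minimal sketch of that sweep-then-idle logic in Java. It’s a hypothetical single-car controller, not any real elevator’s code: the car keeps moving toward the nearest call in its current direction, reverses when there’s nothing left that way, and goes idle when there are no calls at all.

import java.util.TreeSet;

// A toy model of the elevator algorithm: one car, one set of pending calls.
public class ElevatorSketch
{
    enum Direction { UP, DOWN, IDLE }

    private int currentFloor = 0;
    private Direction direction = Direction.IDLE;
    private final TreeSet<Integer> calls = new TreeSet<Integer>(); // floors with pending calls

    public void requestFloor( int floor ) { calls.add(floor); }

    public void step()
    {
        if (calls.isEmpty()) { direction = Direction.IDLE; return; }

        if (direction == Direction.IDLE)
            direction = (calls.first() > currentFloor) ? Direction.UP : Direction.DOWN;

        // Nearest call in the direction we're already moving.
        Integer next = (direction == Direction.UP) ? calls.ceiling(currentFloor)
                                                   : calls.floor(currentFloor);
        if (next == null)
        {
            // Nothing left this way: reverse and try again on the next step.
            direction = (direction == Direction.UP) ? Direction.DOWN : Direction.UP;
            return;
        }

        currentFloor = next;
        calls.remove(next);
        System.out.println("Stopping at floor " + currentFloor);
    }
}

A real controller would also track which direction each call wants to go, and juggle several cars, but the core loop really is that simple.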


Some other interesting factoids:



  • The elevator algorithm is also used for hard disk access, to optimize the motion of the arm when dealing with read/write requests.
  • In areas with large Jewish populations, you will often find Sabbath elevators, which stop automatically at every floor in accordance with some Orthodox and Conservative rabbinic prohibitions against operating electrical switches on the Sabbath. Wild!
  • Some modern elevators (including, apparently, the one in the Adelaide office of my company) require users to select their desired floor from a central console. They are then told which numbered elevator to get on. Inside the elevator, there are no buttons to push. This is apparently much more efficient but has some human-factors drawbacks:

    • The console doesn’t recognize when a group of people is too large to fit in a single elevator.

    • A single person requesting an elevator multiple times might end up with multiple elevators dispatched to retrieve her/him.

    • People who don’t know the system often get on an elevator and end up being taken for a ride!



What other heuristics could you use if you had to program a set of elevators?


Another thing to ponder: how could you determine the finer points of the algorithm used in a given office building, just by calling and riding the elevators? It would seem to require an accomplice at the very least.

Unconventional solutions to computer miscreants

Decades ago, the writers of an early “shared” operating system known as the Incompatible Timesharing System, or ITS, got so fed up with people deliberately trying to find ways to crash the system that they came up with a novel solution — a KILL SYSTEM command that anyone could run that would crash the system (presumably to take all the fun and challenge out of it). I love that. While it’s hard to imagine such a feature being implemented in a modern operating system, I believe the spirit of the idea might still be usable in other contexts.



Pretty much every single online gaming website I’ve seen has a problem with people running cheats — computer programs that stand in for the human and respond with uncanny precision or speed. I know FPS games have a large problem with cheaters who can fire with deadly aim (among other tricks), but my own experience is with more basic games. I used to spend a lot of time on online versions of the word game Boggle, including PlaySite’s Tangleword (which is now IWin’s Boggle) and Yahoo’s Word Racer. Cheaters were a rampant, recurring, and frustrating problem. People would write programs that generated all the words for a given board from a dictionary, and would achieve phenomenal scores as a result, much to the chagrin of everyone trying to win on brainpower alone. Writing such a program is not difficult (I know, because I wrote one; I never used it to win, except against other cheaters), but these people were difficult if not impossible to discourage. Might the solution be to allow anyone a “super-user” account that always wins? I’d love to see the experiment done.
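To show just how little it takes, here’s a rough sketch of how such a word-finder works. The three-by-three board and the five-word dictionary are made up for illustration (real boards are four-by-four or bigger, and you’d load a full word list); this is not the program I wrote, just the same basic idea: depth-first search from every cell, pruning any path that isn’t the prefix of some dictionary word.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class BoggleSketch
{
    private static final char[][] BOARD = { { 't', 'a', 'p' },
                                            { 'e', 'r', 's' },
                                            { 'l', 'i', 'n' } };

    // Stand-in dictionary; a real solver would load tens of thousands of words.
    private static final Set<String> WORDS =
        new HashSet<String>(Arrays.asList("tap", "tape", "rat", "rate", "sir"));
    private static final Set<String> PREFIXES = new HashSet<String>();
    static
    {
        for (String w : WORDS)
            for (int i = 1; i <= w.length(); i++)
                PREFIXES.add(w.substring(0, i));
    }

    private static final Set<String> found = new TreeSet<String>();

    public static void main( String[] args )
    {
        for (int r = 0; r < BOARD.length; r++)
            for (int c = 0; c < BOARD[0].length; c++)
                search(r, c, "", new boolean[BOARD.length][BOARD[0].length]);
        System.out.println(found);   // prints [rat, rate, sir, tap] for this board
    }

    private static void search( int r, int c, String soFar, boolean[][] used )
    {
        if (r < 0 || r >= BOARD.length || c < 0 || c >= BOARD[0].length || used[r][c])
            return;
        String word = soFar + BOARD[r][c];
        if (!PREFIXES.contains(word))
            return;                        // no dictionary word starts this way; give up
        if (WORDS.contains(word))
            found.add(word);
        used[r][c] = true;                 // each cell may be used only once per word
        for (int dr = -1; dr <= 1; dr++)
            for (int dc = -1; dc <= 1; dc++)
                if (dr != 0 || dc != 0)
                    search(r + dr, c + dc, word, used);
        used[r][c] = false;
    }
}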



I recently worked on a major redesign of a website for a large maker of tourist guidebooks. They had just fully embraced the idea of letting users supply content across many different areas of the site, but in every case we had to seriously consider the possibility of malicious uploads, largely because of one pathological individual who had been carrying out a vendetta against the company for the last decade. On every public forum on their old site he took every opportunity to add comments that were embarrassing, confusing, malicious, or disgusting. He would create new accounts as soon as his old ones were banned, often several times in the same day. (The solution we implemented for the site redesign was that uploads everywhere had to be approved by a moderator.)



I got to wondering: what if, instead of banning such an individual, his account got tagged in such a way that he could still view his postings, but no one else could? Presumably he might never know that his account was so tagged, and would continue to waste his energies devising his malicious missives when in fact his words would be reaching nobody. The tag would be activated whenever he is logged in, or when a cookie is set on his machine (presuming he doesn’t disallow cookies; most forums require you to allow them anyway). I’m sure the most persistent people would eventually catch on, and resort to logging out and removing the cookie, or checking from another machine to see if their posts were actually getting through, but at the very least it would increase the burden on these lowlifes.
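As a minimal sketch of the tagging idea, assuming a hypothetical forum data model (these class and field names are made up for illustration), the visibility check could be as simple as:

class Post
{
    String authorId;
    String body;
    Post( String authorId, String body ) { this.authorId = authorId; this.body = body; }
}

class ShadowBanFilter
{
    private final java.util.Set<String> shadowBanned = new java.util.HashSet<String>();

    public void shadowBan( String userId ) { shadowBanned.add(userId); }

    // The tagged user still sees his own posts as usual; nobody else does.
    public boolean isVisibleTo( Post post, String viewerId )
    {
        return !shadowBanned.contains(post.authorId)
            || post.authorId.equals(viewerId);
    }
}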



This idea might work with the problem of cheaters on some gaming sites too. If their account gets tagged, they can still “log in”, but no one would see their scores but them. I’ve found that such cheaters actually thrive on the outraged comments they generate, but I could never convince other players to just ignore them in the comment areas. When such a “secretly blacklisted” tag is set by a moderator, no one would see the cheater’s comments either, though he would see theirs. To the cheater, it would just seem like they were being ignored. It would be trickier for FPS-type games; when the cheater kills someone, the server would have to pretend (to the cheater’s client machine) that the person’s avatar had died, and send no more updates as to that person’s whereabouts. Some situations wouldn’t be fakable but I bet you could fool a lot of the cheaters for much of the time. It would be the ultimate pwn.



Could spam be dealt with similarly? I’ve always wished there were an option in email programs that allowed you to respond as if your account didn’t exist. That is, send the exact response to the sender that the mail server on your host would send if there were no such login on that host. I’m not fully sure this could work, though; is the check for whether an account exists done during the initial handshake between sender and receiver, or at some later point?

Sunday, January 27, 2008

Really big numbers. And I mean, REALLY big.

Consider a favorite song of mine, Built To Spill’s “Randy Described Eternity”:








Built to Spill, “Randy Described Eternity”



The song asks you to imagine a very large number by means of an analogy. The full lyrics go like this:



every thousand years

this metal sphere

ten times the size of Jupiter

floats just a few yards past the earth

you climb on your roof

and take a swipe at it

with a single feather

hit it once every thousand years

`til you’ve worn it down

to the size of a pea

yeah I’d say that’s a long time

but it’s only half a blink

in the place you’re gonna be

where you gonna be

where will you spend eternity

I’m gonna be perfect from now on

I’m gonna be perfect starting now

stop making that sound

stop making that sound

I will say I forgot

but it was only yesterday

and it’s all you had to say


Despite the technical difficulties of having a metal sphere that large pass that close to the Earth, it’s a cool metaphor. So how long of a time are we talking about here? Let’s come up with an estimate.


First consider how much a swipe of a feather would remove from the sphere. Let’s take a guess and say that 100 swipes would remove a cubic millimeter. (That’s being conservative, I think; could you really completely dissolve a grain of sand by brushing it a hundred times with a feather?)


Now consider the size of the sphere, “ten times the size of Jupiter”. Let’s assume they mean ten times the volume, which would be 10 × 1.43128 × 10¹⁵ km³, or 1.43128 × 10¹⁶ km³, which in cubic millimeters is 1.43128 × 10³⁴ mm³. The “size of a pea” amount left over is incidental.



So if we just multiply that figure by 100 (feather swipes) times 1000 years, we get 1.43128 × 10³⁹ years. Let’s call this Randy’s number. Is it older than the current age of the universe? Yes, by a long shot. The universe is only about 1.37 × 10¹⁰ years old; Randy’s number is 100 000 000 000 000 000 000 000 000 000 times larger. Yeah, I’d say that’s a long time.
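If you’d rather let the machine do the arithmetic, here’s a quick back-of-the-envelope check in Java, using the same rough guesses as above (none of these figures are precise):

public class RandysNumber
{
    public static void main( String[] args )
    {
        double jupiterVolumeKm3 = 1.43128e15;              // volume of Jupiter, km^3
        double sphereVolumeMm3  = 10 * jupiterVolumeKm3    // ten Jupiters...
                                  * 1e18;                  // ...in mm^3 (1 km^3 = 10^18 mm^3)
        double swipesPerMm3     = 100;                     // guess: 100 feather swipes per mm^3
        double yearsPerSwipe    = 1000;                    // one swipe every thousand years

        double randysNumber  = sphereVolumeMm3 * swipesPerMm3 * yearsPerSwipe;
        double ageOfUniverse = 1.37e10;                    // years

        System.out.printf("Randy's number: %.5e years%n", randysNumber);        // ~1.43128e39
        System.out.printf("That is %.1e times the age of the universe%n",
                          randysNumber / ageOfUniverse);                        // ~1.0e29
    }
}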



Clifford Pickover’s Mazes For The Mind mentions a few other large (estimated) numbers:



  • The talking number — the total number of words spoken by humans since the dawn of history: 10¹⁶

  • The Coney Island number — the total number of grains of sand on the beach at Coney Island: 10²⁰


  • The ice age number — the total number of ice crystals necessary to form the ice age: 10³⁰


Douglas Hofstadter in Metamagical Themas mentions a few others:



  • The hemoglobin number — the number of hemoglobin molecules in the human body: 6 × 10²¹


  • Rubik’s constant — the number of possible configurations of a Rubik’s cube: 4.3 × 10¹⁹


Huge numbers, to be sure. Larger still is the infamous googol, which is 10¹⁰⁰. Now consider the googolplex, which is 10 raised to the power of a googol. It is said that such a number could never be written out in full, as its digits would require more space than is available in the known universe.
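To put that claim in perspective, here’s a throwaway sketch: a googol fits comfortably in a BigInteger, but a googolplex has a googol digits, so no computer could ever hold its decimal expansion.

import java.math.BigInteger;

public class Googol
{
    public static void main( String[] args )
    {
        BigInteger googol = BigInteger.TEN.pow(100);
        System.out.println(googol);                       // a 1 followed by 100 zeros
        System.out.println(googol.toString().length());   // 101 digits
        // There is no analogous one-liner for a googolplex: writing it out
        // in decimal would take a googol digits.
    }
}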



But even these numbers are peanuts compared to some other numbers which have been envisioned. Consider tetration, Knuth’s up-arrow notation, Steinhaus-Moser notation, or Conway’s chained arrow notation.
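Just to make that notation concrete, here’s a sketch of Knuth’s up-arrow operator for the only inputs small enough to actually evaluate; anything much beyond these examples won’t fit in memory (or in the universe).

import java.math.BigInteger;

public class UpArrow
{
    // a ↑^n b: one arrow is plain exponentiation; more arrows iterate the
    // operation one level down. Assumes b is small enough to fit in an int.
    public static BigInteger upArrow( BigInteger a, int arrows, BigInteger b )
    {
        if (arrows == 1)
            return a.pow(b.intValue());          // a ↑ b = a^b
        if (b.equals(BigInteger.ZERO))
            return BigInteger.ONE;               // a ↑^n 0 = 1
        // a ↑^n b = a ↑^(n-1) ( a ↑^n (b-1) )
        return upArrow(a, arrows - 1, upArrow(a, arrows, b.subtract(BigInteger.ONE)));
    }

    public static void main( String[] args )
    {
        BigInteger two   = BigInteger.valueOf(2);
        BigInteger three = BigInteger.valueOf(3);
        System.out.println(upArrow(two, 2, BigInteger.valueOf(4)));   // 2 ↑↑ 4 = 65536
        System.out.println(upArrow(three, 2, three));                 // 3 ↑↑ 3 = 7625597484987
    }
}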


Finally, just to fry your brain a little more, consider a number proposed in the excellent web comic xkcd, in the strip “What xkcd Means”:





The Ackermann function with Graham’s number as its arguments is a number orders of magnitude larger than we can imagine. Actually, you can hardly say it’s “orders of magnitude” larger since that only implies exponentially larger; this number goes way, way, way beyond that!



But it’s still finite. And if you divide it by infinity, you still get zero. That kills me.

Quick and Dirty 3-D Rendering

By using a handy shortcut, you can render 3D objects painlessly.


Usually this is done using transformation matrices. This sort of matrix manipulation was actually worked out back in the 19th century, but like many great mathematical discoveries, had no real practical application at the time. Until the modern era, that is, when someone discovered that this trick was perfect for rendering 3D graphics. It lets you rotate, scale, shear, etc. any image, and the really nice thing is that once you decide all the transformations you want, you can reduce them to a single matrix and so compute the positions of everything you want to draw rather quickly.


But unless you live and breathe the stuff, it’s too complicated to remember how to set up off the top of your head, and it’s a lot of code even if you do. So here’s a tip that will get you rendering 3D objects in no time.



First, draw yourself a set of axes, like so:





It doesn’t have to be very accurate, just so long as it looks fairly proportional. Now, imagine that each of the arrows is one unit long. You need to write functions that return screen x and y for a given (x,y,z) point. So ask yourself — for each unit of x, how much do we actually move horizontally? How about for each unit of y? For each unit of z, we don’t move horizontally at all, but for the other two, you can make a guess. Let’s say the tip of the x axis is 16 pixels to the right of the origin, and the tip of the y axis is 20. Again, accuracy isn’t important, as long as you’re in the ballpark and the numbers are proportional. Since you want it to appear in the center, add an offset that’s half the width of the screen. So your method for determining the screen x coordinate of your point will look something like this:



private int getX( Point p )
{
    return CENTER_X + p.x*16 + p.y*20 + p.z*0;
}

Obviously, the p.z*0 could have been left out, but I leave it there for illustration.


Similarly, the screen y coordinate can be determined just by guessing the offsets of the tips of the x, y, and z axes on our sketch. They look like about -10, 5, and -20 respectively (remember y coordinates count down from the top in most computer graphics systems). So our function for determining the screen y of your point will look like this:



private int getY( Point p )
{
    return CENTER_Y + p.x*-10 + p.y*5 + p.z*-20;
}

Let’s see it in action. Here’s an applet that defines a few lines in terms of their 3D endpoints, and then draws them on our 2D screen using the methods described above. Specifically, it defines all the lines necessary to draw the cube that stretches from -1 to 1 on each axis. Give it a click to run it:




Enable Java in your browser to see it.



(Java source for main class here. Parent applet here.)
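If the applet won’t run for you (modern browsers have largely dropped Java applet support), here’s a hypothetical standalone version of the same trick: it builds the corners of the cube that stretches from -1 to 1 on each axis and prints the projected 2D screen endpoints of its twelve edges instead of drawing them.

public class QuickCube
{
    private static final int CENTER_X = 160;
    private static final int CENTER_Y = 120;

    private static int getX( int x, int y, int z ) { return CENTER_X + x*16  + y*20 + z*0;   }
    private static int getY( int x, int y, int z ) { return CENTER_Y + x*-10 + y*5  + z*-20; }

    public static void main( String[] args )
    {
        // The eight corners: each bit of i picks -1 or 1 for one coordinate.
        int[][] corners = new int[8][3];
        for (int i = 0; i < 8; i++)
        {
            corners[i][0] = ((i & 1) == 0) ? -1 : 1;
            corners[i][1] = ((i & 2) == 0) ? -1 : 1;
            corners[i][2] = ((i & 4) == 0) ? -1 : 1;
        }

        // Two corners form an edge when they differ in exactly one coordinate.
        for (int i = 0; i < 8; i++)
            for (int j = i + 1; j < 8; j++)
                if (Integer.bitCount(i ^ j) == 1)
                    System.out.println(
                        "(" + getX(corners[i][0], corners[i][1], corners[i][2]) + ","
                            + getY(corners[i][0], corners[i][1], corners[i][2]) + ") -> ("
                            + getX(corners[j][0], corners[j][1], corners[j][2]) + ","
                            + getY(corners[j][0], corners[j][1], corners[j][2]) + ")");
    }
}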


Not bad, huh? It’s handy for quick visualizations, and you’ll probably see it again here before long.

Towers of Hanoi

Any discussion of beauty in programming must necessarily begin with recursion in general, and the Towers of Hanoi in particular. This will be review for anyone who’s studied computer science, but that’s okay since I won’t have many readers at this point anyway.





The Towers of Hanoi is a puzzle of simple construction but fascinating complexity. It consists of three pegs. To start, one of the pegs holds a stack of disks, arranged from largest on the bottom to smallest on top. The object is to move the whole stack to one of the other pegs, with the restrictions that you can only move one disk at a time, and a disk may never be placed on top of a smaller disk.



There’s a beautiful computer algorithm to solve it, and that’s what I want to talk about. But the Towers of Hanoi was a toy long before there were computers. So first, have a play at solving it by yourself. Increase the number of disks if you find it too easy to start with. Most people, when they first start playing with it, end up moving disks pretty much randomly at the start. After a point you may find yourself making small goals, like “Okay, I need to get these smallest three disks over to this peg, so I can move the next biggest disk…” Eventually you realize that, if you’re starting with, say, six disks, you’re going to have to move five of them out of the way (onto the extra peg), then move the sixth and largest disk to the target peg, then move the first five on top of it.


So let’s look at the actual program that solves it. The number of disks has been made a parameter so we can eventually solve it for any number of disks. I’ll add comments to make it exceedingly clear what’s going on.



private static void hanoi( int numberOfDisks, String fromPeg,
                           String targetPeg, String otherPeg )
{
    // First, move all but the largest disk to the extra peg.
    if (numberOfDisks > 1)
    {
        hanoi( /* move */ numberOfDisks-1, /* disks */
               /* from the */ fromPeg,
               /* to the */ otherPeg,
               /* via the */ targetPeg);
    }

    // Next, move the largest disk to the target peg. We'll just print
    // out the move we're making.
    System.out.println("Move a disk from " + fromPeg + " to " + targetPeg);

    // Lastly, move the disks that we moved out of the way on top of
    // the largest disk.
    if (numberOfDisks > 1)
    {
        hanoi( /* move */ numberOfDisks-1, /* disks */
               /* from the */ otherPeg,
               /* to the */ targetPeg,
               /* via the */ fromPeg);
    }
}


We’ve had to add the extra peg as the last parameter, just to keep track of it. But notice how similar it is to the original plan, of moving all but the largest disk out of the way, then moving the largest disk, then moving all the remaining disks on top of it. But as an algorithm it looks like it only sketches the broad plan. It just doesn’t look complex enough to solve that first step — moving all but the largest disk out of the way — not to mention the third step, which is just as complex. But miraculously, it works! Try running the code yourself. How could it possibly do all those intermediate steps?
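For example, a hypothetical main method in the same class that solves it for three disks:

public static void main( String[] args )
{
    // Move three disks from peg A to peg C, using peg B as the spare.
    hanoi(3, "A", "C", "B");
}

produces the familiar seven-move solution:

Move a disk from A to C
Move a disk from A to B
Move a disk from C to B
Move a disk from A to C
Move a disk from B to A
Move a disk from B to C
Move a disk from A to C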



It turns out that the problem of how to move n-1 disks from the “from” peg to the “other” peg is just a smaller, simpler version of the problem of moving n disks from the “from” peg to the “target” peg. It’s just that we’re now using the “extra” peg as our target peg, and the target peg as the extra peg. So this method just keeps solving smaller and smaller versions of the problem, until eventually it has to solve it for only a single disk. And if you trace the code for what happens when there’s a single disk (when numberOfDisks == 1), you’ll see that all it does is the print statement.



To me, the Towers of Hanoi problem is the most beautiful algorithm there is. And it fits the requirements for a recursive algorithm perfectly; it can be broken up into smaller problems, until it reaches an exceedingly simple case (just moving a single disk).


I wonder how long it took after the first computers were made until the Towers of Hanoi was successfully programmed. Very probably it was done in machine code, using a stack of some sort. But what was the first language that would have allowed it to be done with a nice readable recursive function? I would love to have been a fly on the wall.


There’s lots more to the Towers of Hanoi, which I will cover in a later post.

Hello, world!

Hello, World! And welcome to Computronium.


In this forum I’ll be talking about things that thrill my brain. Mostly, as the name implies, it’ll be about computing, especially the recreational variety, but I’ll be taking brief interludes into math, physics, astronomy, or whatever else strikes my fancy. I’ll be focusing on the bits that are just fun, or interesting, or beautiful to me. Hopefully they will be to you too.



I have met a number of people in my life who enjoy computer programming for its own sake. But there are also people who think that anything created on a computer is inherently soulless, and that any art created on a computer is cold and robotic. I couldn’t disagree more. Art created on a computer is as human as the person who makes it. But the logic underneath is beautiful in and of itself, too.


Though I’ll be presenting things that give me “intellectual” thrills, my goal will always be to simplify, and to explain clearly, without dressing things up in fancy language. I’ll be writing about what I know but also about what I don’t — things I’m curious about. So I’m counting on input from you all. Not everything I’ll be writing about will be deeply profound — some of the topics I’m pondering will require just a few sentences. But even in dusting off these old ideas that I’ve pondered before, I’ve come up with new ways of looking at them, as well as some completely new ideas. I’m sure, with the participation in this forum that I’m hoping for, there’ll be lots and lots of other things that come up.



So, the main topic: computing. Computers themselves are ghastly, I believe, BUT, they’re the only things that let you compute. We won’t be arguing the merits of Java versus C++ here, or talking much about the flavor du jour of software processes. I have preferences in these matters, of course, but the underlying algorithms are what really interest me. Some languages do indeed encourage you to look at problems in interesting ways, but for much of what I’ll be talking about, the choice of programming language is pretty much moot. And as far as process goes — while I’m totally convinced of the necessity of good software practices, I don’t find a lot of joy in talking about them.