A comment over on Reddit made me realise that I haven’t yet mentioned a key characteristic of my Sheldon robot project – inaccuracy.
I mean, I could* build a robot with incredible precision that would be able to locate and move itself to tolerances of fractions of a millimetre. Think of those amazing pick-and-place robots that populate circuit boards with hundreds of minuscule components. They can perform and repeat actions with accuracies measured in fractions of a bee’s dick (to borrow Dave Jones’ memorable term).
But where’s the fun in that?
When I posted about the Sheldon project on Reddit, someone suggested I put encoders on the motors (because I hadn’t mentioned that it already has them). This is the only way of accurately gauging the robot’s speed, he said. And he’s right. But what if I don’t want that accuracy?
Two good ideas
The amateur robotics group to which I belonged in the mid-1980s was led by a fascinating chap, Richard, with a remarkable idea that has stayed with me to this day. Well, two ideas actually.
The first idea was that we should build robots capable of existing in a world designed for humans. This seems less radical now, but back then all robots that were doing genuinely useful things, such as manufacturing, were operating in environments designed specifically for them. The human world doesn’t consist solely of well-lit, clutter-free rooms with perfectly flat floors. Remember that old joke about the Daleks from Dr Who being defeated by the cunning use of staircases? It wasn’t very funny to robot fan-boys of the 1980s.
The second idea was to do away with precision – partly because achieving it usually leads to heavy, bulky and power-hungry machinery, but mostly because it’s expensive. The traditional notion in robotics is to strive for a machine that can be told to move an actuator to specific X, Y and Z coordinates and expect it to be able to do that reliably and virtually without error. Or maybe it should be able to measure distances to within a nanometre.
Those capabilities are out of reach for me, but I don’t want them anyway. And there is an alternative.

Powerful, but not much fun
Take the example of a robot arm picking up a ball. The traditional approach would be to use sensors (cameras, lasers, ultrasound or whatever) to locate the ball’s position in space to a high level of accuracy. Knowing the current position of the robot’s hand, your computer can then calculate a series of commands to operate motors that will move the hand to exactly the ball’s location. The robot can do this because of the low level of error (or what I prefer to call ‘wobble’) in its actuators and joints.
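To make the contrast concrete, here’s a toy Python sketch of that measure-once-then-move approach. Everything in it is invented for illustration (a one-dimensional world, made-up sensor and actuator functions, none of it from the Sheldon code): the whole strategy rests on one precise measurement and one precise move.

```python
import random

def measure_ball_position():
    """Pretend high-accuracy sensor: one very precise reading, in mm."""
    return 500.0

def move_hand_to(target_mm, wobble_mm=0.05):
    """Pretend high-precision actuator: tiny error, heavy and expensive."""
    return target_mm + random.uniform(-wobble_mm, wobble_mm)

# Measure once, move once, and simply trust the hardware to be right.
ball = measure_ball_position()
hand = move_hand_to(ball)
print(f"Hand stopped at {hand:.3f} mm; ball is at {ball:.3f} mm")
```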

That’s more like it…
Richard’s approach was different. First, establish the relative positions of ball and hand, but don’t worry too much about wobble. Start moving the hand towards the ball. Go into a loop where we repeatedly ask: where’s the ball? Where’s the hand? Are they close? Once the hand gets close to the ball, slow down – maybe progressively. Then rely on touch sensors on the (now) slow-moving hand to know when it has contacted the ball.
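In code, Richard’s loop might look something like this. It’s only my own sketch in Python, with a simulated one-dimensional ball and hand, made-up noise figures and a crude touch threshold standing in for real sensors; nothing here is taken from Richard’s group or from Sheldon itself.

```python
import random

BALL_POSITION = 500.0   # where the ball really is, in mm (unknown to the robot)
TOUCH_RANGE = 2.0       # the touch sensor fires within this distance, in mm

def sense_ball():
    """Cheap, sloppy position sensor: readings wobble by several millimetres."""
    return BALL_POSITION + random.uniform(-5.0, 5.0)

def sense_touch(hand_mm):
    """Touch sensor on the hand: true only when we are actually in contact."""
    return abs(hand_mm - BALL_POSITION) < TOUCH_RANGE

hand = 0.0
while not sense_touch(hand):
    error = sense_ball() - hand          # where's the ball? where's the hand?
    # Move a fraction of the remaining distance, capped at 20 mm per step:
    # fast while far away, progressively slower as the hand closes in.
    step = max(-20.0, min(20.0, 0.2 * error))
    hand += step + random.uniform(-1.0, 1.0)   # sloppy actuator ('wobble')

print(f"Touched the ball with the hand at {hand:.1f} mm")
```

The accuracy comes from repeatedly closing the loop and from the touch sensor, not from the quality of any single measurement or motor step, which is exactly why cheap, wobbly hardware is good enough.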
You can achieve this with much lower-grade mechanisms and components. For the 1980s, it was an ambitious goal, especially given the vision systems available at the time. But I still think it’s an interesting approach because it has less to do with building high-performance machinery (not something that interests me) and more to do with coming up with creative algorithms – which definitely is interesting.
—
* No, I couldn’t.