Fruit-picking robotics

In a magazine a year or two ago, I read an article about computers that said of robotics researchers, "They treasure their videos because tomorrow it won't work." I quote from memory and the italics may be a little off. In Victor Davis Hanson's Mexifornia, I read - and I know I have the italics accurately placed here - "It is physically hard to pick peaches all day." So if your question is "Are robots picking peaches? Just peaches? Not tree limbs and bird nests and the less wary or nimble farmhands and empty air too?" I'd guess the answer is no.

A website can't be the place to design and demonstrate a robot, even a malfunctioning one, but the basics - shape recognition, further inspection, a decision on where to make the cut - can be modeled in flat space. At first I pictured a device with feelers that closed in, left and right, top and bottom. And then what? I wasn't sure what. Even when the programmer allows for and generously postulates the existence of interfering leaves and immovable branches, even when he makes it up according to his own rules and tells his own machine where these obstructions are and how much they might push back relative to the resistance of the real prize, he knows this just isn't going to work in the field. He's trying to make his robot smarter than it can ever be.

It's like translation software. As symbols and sounds have no meaning until they vault the eyes and ears, so sensations are just mechanics until they climb the arms and spinal cord.

Having decided that, I was going to field a model that postulates you are down on the floor of an orchard, with a sight-guided telescoping limb with five fingers zooming upward, making contact, and talking directly back to your own limb. (Not a very great distance, but far enough: the orchards I saw in central Washington, the coffee in Minas Gerais, and the oranges in Portugal and Turkey all consisted of short trees, but in every case you'd still have to get on a ladder to reach the tippity-top. And in the case of the oranges at least, the prizes were indeed near the tippity-top.) Anyway, you see a fruit; your fingers close around it; you have to decide which direction to pull. The "robot," the mechanical hand that is the extension of your own, does no decision-making. That is all yours. I really think this is where robotics has to go.

And having decided that, I saw another fault, one that programming might widen instead of narrow: the way the mechanical fingers would "talk back" to your own. Briefly I considered coding a Java slider that pushed back according to Hooke's Law, like a simple spring. The computer behind it would measure the distance you (I mean your mechanical finger) had displaced the slider, calculate the back pressure you (I mean your real finger) must be made to feel, and reset the slider. You (the viewer of this webpage) might see a little stutter in the response, and be left wondering how this would translate in the field. But then I thought: why bother? Digital computerization just complicates things. I think even analog computerization would complicate things.
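The sample-calculate-reset loop described above is short enough to sketch. This is only an illustration of the idea, not anything I've built: the stiffness constant and the displacements are made-up numbers, and the restoring force is plain Hooke's Law, F = -kx.

```python
# A sketch of the slider idea: the computer samples how far the mechanical
# finger has displaced the slider, computes the back pressure the real
# finger must be made to feel via Hooke's Law (F = -k * x), and resets.
# The stiffness K and the displacements are hypothetical values.

K = 40.0  # spring stiffness, N/m (made-up)

def back_pressure(displacement_m: float) -> float:
    """Force (N) pushed back at the real finger for a given displacement."""
    return -K * displacement_m

# The "stutter" a viewer would see: the loop samples at discrete ticks
# rather than responding continuously the way a real spring does.
for x in [0.00, 0.01, 0.02, 0.03]:  # metres of displacement per tick
    print(f"displacement {x:.2f} m -> force {back_pressure(x):+.1f} N")
```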

Just do it with hydraulics. The mechanical finger's "pressure sensor" ought to be just a piston that squeezes brake fluid back to a piston mounted under your real finger.

...but more to follow...

On further meditation, I think there is room for programmable robotics after all. Not necessarily in the fruit-feeling part - and is there a computer-science term for "task that is easier to do than the broader problem is to solve"? - nor in the fruit-finding part, but in the fruit-reaching part. I plan to program, first in Python and then in HTML5, a model of a multi-elbowed arm that works its way around obstructions and reaches a target. The exercise will do us all good!
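Since the plan names Python first, here is a flat-space sketch of the reaching part alone - a three-elbowed arm steered to a target by cyclic coordinate descent, one standard inverse-kinematics trick. The link lengths and target are made up, and obstructions are ignored entirely; working around them is the harder problem the full model would have to solve.

```python
import math

# A three-elbowed arm in flat space, steered toward a target by cyclic
# coordinate descent (CCD): rotate each joint, tip-first, so the fingertip
# swings toward the target, and repeat. Link lengths, starting angles,
# and the target are all hypothetical; no obstructions are modeled.

LINKS = [1.0, 0.8, 0.6]       # segment lengths
angles = [0.3, 0.3, 0.3]      # joint angles, radians

def joint_positions(angles):
    """Return the (x, y) of each joint plus the fingertip."""
    pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for length, theta in zip(LINKS, angles):
        a += theta
        x += length * math.cos(a)
        y += length * math.sin(a)
        pts.append((x, y))
    return pts

def ccd_step(angles, target):
    """One CCD sweep: rotate each joint so the fingertip swings toward target."""
    for i in reversed(range(len(angles))):
        pts = joint_positions(angles)
        jx, jy = pts[i]
        tipx, tipy = pts[-1]
        a_tip = math.atan2(tipy - jy, tipx - jx)
        a_tgt = math.atan2(target[1] - jy, target[0] - jx)
        angles[i] += a_tgt - a_tip
    return angles

target = (1.2, 1.1)
for _ in range(20):           # a few sweeps are usually enough to converge
    ccd_step(angles, target)
tip = joint_positions(angles)[-1]
print(f"fingertip ends near ({tip[0]:.3f}, {tip[1]:.3f})")
```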