Dancing robots. I remember in 2000 when Asimo took its very calculated first steps. It was terrifying (robot overlords came to mind) and fascinating (the technology!). The math required for a robot to move at all is migraine-inducing in its complexity.
But to walk? Mystifying.
At least to someone like me, who is befuddled by anything higher than pre-algebra and sees higher math as nothing but pure magic.
But the walking Asimo, and its many successors, never simulated “real walking” or that fluid “human” movement. It was amazing, to be sure. And yet… not quite right. Something as simple as “walking” appeared too complex for man to figure out how to make a robot do.
And now they’re fucking dancing on pointe and moving with a grace that mimics, all too well, that of their human makers. Together. In unison.
What happened? Engineering certainly improved. The physical makeup of the robot allows for the movement. But the movement itself comes from the software, or the algorithms inside the robot.
Now, I’m a developer. And I love science. But I’m no scientist, and again, I have issues with math. Still, I learned something from a conversation in another membership group, from someone who is much more “up” on all of this than I am.
They pointed out something absolutely fascinating with this dancing robot video. How did we go from robots that can’t make it up a flight of stairs to … dancing?
Watch this clip from 2015 where Dr. Ken Stanley talks about artificial intelligence, objectives, and the Novelty Search algorithm that led to walking robots. His theory is that by fixating on objectives… you’re not going to achieve them.
Apparently, the fallacy in getting a robot to walk like a human is in programming it … to walk.
In other words: programming the outcome.
Getting a robot to actually walk is done by programming it not to walk, but to do something different every time it tries. To say: “You fell on your face? Was it a different way of falling than last time? Wonderful. Try something new again.” That way it learned oscillation, and eventually… walking. It moved, not based on the objective programming it received, but based on how it had moved previously, continually trying something new.
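For the programmers among us, the idea above can be sketched in a few lines. This is a minimal, hypothetical toy version of novelty search (not Dr. Stanley's actual implementation): each candidate behavior is scored only by how *different* it is from behaviors already seen, never by how close it is to any goal. The `evaluate` and `mutate` functions here are stand-ins I made up for illustration.

```python
import random

def novelty(behavior, archive, k=3):
    """Score a behavior by its mean distance to the k nearest
    behaviors in the archive. No goal is ever consulted."""
    if not archive:
        return float("inf")  # anything is novel when nothing has been tried
    dists = sorted(abs(behavior - past) for past in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(evaluate, mutate, seed, generations=30):
    """Repeatedly keep whichever mutation behaves most unlike
    everything tried before. 'Falling differently' is rewarded."""
    archive = []
    current = seed
    for _ in range(generations):
        candidates = [mutate(current) for _ in range(10)]
        behaviors = [evaluate(c) for c in candidates]
        scores = [novelty(b, archive) for b in behaviors]
        best = max(range(len(candidates)), key=lambda i: scores[i])
        archive.append(behaviors[best])  # remember the new "fall"
        current = candidates[best]
    return current, archive
```

Run on even a trivial one-number "genome," the archive keeps spreading into unexplored territory, which is the whole point: exploration, not a target.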
What I’m wrapping my brain around here is this idea that what led to the robot walking had little to do with achieving the objective of walking… It was about learning how to walk.
The distinction is subtle but real. (Very different “whys” here, too.)
In the programming of learning how to walk, a fall was rewarded rather than mitigated. A fall was not a failure; it was another step in the process of learning to go forward. As the robot learned to go forward, it made discoveries along the way (like balance), and it happened to learn to walk.
Do you see where I’m going with this?
How can we tell a computer, “Don’t achieve a goal, discover one. Don’t be perfect. Try. Learn. Try again,” and not give our already fallible human selves the same programming?
Give yourself the permission to just try something. To learn to walk. To fall.
And see what comes of it.