We wanted to solve robot problems and needed some vision, action, reasoning, planning, and so forth. We even used some structural learning, such as was being explored by Patrick Winston.
We do know that we can set certain algorithms for machines to do certain things - now, that may be a simple task: a factory robot that moves one object from here to there. That's a very simple top-down solution. But when we start creating machines that learn for themselves, that is a whole new area we've never been in before.
I ultimately got into robotics because for me, it was the best way to study intelligence.
Robots are good at things that are structured.
Our robots are signing up for online learning. After decades of attempts to program robots to perform complex tasks like flying helicopters or surgical suturing, the new approach is based on observing and recording the motions of human experts as they perform these feats.
One of the interesting applications of symbolic systems is artificial intelligence, and I spent some time thinking about how to create a brain that operates the way ours does.
If you wanted to design a robot that could learn as well as it possibly could, you might end up with something that looked a lot like a 3-year-old.
When David Marr at MIT moved into computer vision, he generated a lot of excitement, but he hit up against the problem of knowledge representation; he had no good representations for knowledge in his vision systems.
I had been impressed by the fact that biological systems were based on molecular machines and that we were learning to design and build these sorts of things.
When I was building robots in the early 1990s, the problems of voice recognition, image understanding, VoIP, even touchscreen technologies - these were robotics problems.
There was a failure to recognize the deep problems in AI; for instance, those captured in Blocks World. The people building physical robots learned nothing.