There was a failure to recognize the deep problems in AI; for instance, those captured in Blocks World. The people building physical robots learned nothing.
When I was building robots in the early 1990s, the problems of voice recognition, image understanding, VoIP, even touchscreen technologies - these were robotics problems.
We do know that we can set certain algorithms for machines to do certain things - now that may be a simple task. A factory robot that moves one object from here to there. That's a very simple top-down solution. But when we start creating machines that learn for themselves, that is a whole new area that we've never been in before.
We wanted to solve robot problems and needed some vision, action, reasoning, planning, and so forth. We even used some structural learning, such as was being explored by Patrick Winston.
Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.
Humans have 3 percent human error, and a lot of companies can't afford to be wrong 3 percent of the time anymore, so we close that 3 percent gap with some of the technologies. The AI we've developed doesn't make mistakes.
As Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a 'singularity.'
The benefits of having robots could vastly outweigh the problems.
Robots have a rich and storied history in movies.
Robots are good at things that are structured.
Some of the world's greatest feats were accomplished by people not smart enough to know they were impossible.