Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.
AI will most likely lead to the end of the world, but in the meantime, there'll be great companies.
Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, and we will have multiplied the intelligence, the human biological machine intelligence of our civilization, a billion-fold.
Humans have a 3 percent error rate, and a lot of companies can't afford to be wrong 3 percent of the time anymore, so we close that 3 percent gap with some of these technologies. The AI we've developed doesn't make mistakes.
I definitely fall into the camp of thinking of AI as augmenting human capability and capacity.
It's going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.
By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.
Ultimately, I hypothesize that technology will one day be able to recreate a realistic representation of us as a result of the plethora of content we're creating converging with other advances in machine learning, robotics and large-scale data mining.
Since the rise of Homo sapiens, human beings have been the smartest minds around. But very shortly - on a historical scale, that is - we can expect technology to break the upper bound on intelligence that has held for the last few tens of thousands of years.
We need to make a greater investment in human intelligence.
There was a failure to recognize the deep problems in AI; for instance, those captured in Blocks World. The people building physical robots learned nothing.