It quotes the same couple of cherry-picked AI researchers as all the other stories – Andrew Ng, Yann LeCun, etc. – then stops without mentioning that alternate opinions exist. AI researchers, including some of the leaders in the field, have been instrumental in raising issues about AI risk and superintelligence from the very beginning.
To this must be conjoined an overriding concern for the benefit of humanity. The research questions are beginning to be formulated and range from the highly technical (foundational issues of rationality and utility, provable properties of agents, etc.) to the broadly philosophical.

An experimental approach has also been promoted by Ben Goertzel in a nice blog post on friendly AI. If there is a coming era of safe (not too intelligent) AGI, then we will have time to think further about later, more dangerous eras.

The point where a computer can do anything humans can do will require the second milestone. Hans Moravec (wiki) is a former professor at the Robotics Institute of Carnegie Mellon University, namesake of Moravec's Paradox, and founder of the Seegrid Corporation for industrial robotic vision systems.