Stephen Hawking - AI Will Be Either Best or Worst For Humanity

 
STEPHEN HAWKING HAS WARNED ABOUT THE DANGERS OF ARTIFICIAL INTELLIGENCE

Stephen Hawking has warned that the development of full artificial intelligence "could spell the end of the human race."
 
Professor Stephen Hawking has warned that the creation of powerful artificial intelligence will be either the best or the worst thing ever to happen to humanity, and praised the creation of an academic institute dedicated to researching the future of intelligence as crucial to the future of our civilization and our species.

Hawking was speaking at the opening of the Leverhulme Center for the Future of Intelligence (LCFI) at Cambridge University, a multidisciplinary institute that will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research.

"We spend a great deal of time studying history," Hawking said, "which, let's face it, is mostly the history of stupidity. So it's a welcome change that people are studying instead the future of intelligence."

While the world-renowned physicist has often been cautious about AI, raising the risk that humanity could be the architect of its own destruction if it creates a superintelligence with a will of its own, he was also quick to highlight the positives that AI research can bring.

"The potential benefits of creating intelligence are huge," he said. "We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution we will be able to undo some of the damage done to the natural world by the last one, industrialization. And surely we will aim to finally eradicate disease and poverty.

"Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization."

Huw Price, the Center's Academic Director and the Bertrand Russell Professor of Philosophy at Cambridge University, where Hawking is also an academic, said that the center came about partially as a result of the university's Center for Existential Risk. That institute, mocked by the tabloid press as offering "Terminator Studies", examined a wider range of potential threats to humanity, while the LCFI has a narrower focus.

"We've been trying to slay the 'Terminator' meme," Price said, "but, like its namesake, it keeps coming back for more."

AI pioneer Margaret Boden, Professor of Cognitive Science at the University of Sussex, praised the progress of such discussions. As recently as 2009, she said, the topic wasn't taken seriously, even among AI researchers. AI is "hugely exciting," she said, "but it has limitations, which present grave dangers given uncritical use."

The academic community is not alone in warning about the potential dangers of AI alongside its potential benefits. A number of pioneers from the technology industry, most famously the entrepreneur Elon Musk, have also expressed their concerns about the damage that a superintelligent AI could wreak on humanity.
