The observations he makes about artificial intelligence are curiously prosaic: “AI can augment our existing intelligence to open up advances in every area of science and society. However, it will also bring dangers,” he says. He offers odd truisms such as, “If we can connect a human brain to the internet it will have all of Wikipedia as its resource.” And when it comes to answering the question of the future of humanity, he is strangely timid. Will we have genetic engineering of humans? Probably. “Of course, many people will say that genetic engineering on humans should be banned. But I rather doubt that they will be able to prevent it…” Does he think it’s a bad idea? Not exactly. “In a way, the human race needs to improve its mental and physical qualities if it is to deal with the increasingly complex world around it and meet new challenges such as space travel.”

– Michael Brooks, "The hawking of Stephen: is Brief Answers to the Big Questions more spin than science?" at New Statesman

Along with Sir Martin Rees, Elon Musk, and Henry Kissinger, among many lesser-known figures, the late Stephen Hawking worried about an AI apocalypse (the “worst event in the history of our civilization”).
What makes the views of otherwise very bright people seem so "prosaic"?
One factor is that they don't seem to grasp the underlying situation with respect to artificial intelligence. Here are two areas seldom considered:
1. What would we need to make machines “intelligent”? We don’t even understand animal intelligence clearly. Are seals really smarter than dogs? What about the fact that plants can communicate to adjust to their circumstances, without a mind or brain? Where does that place plants with respect to intelligence? How are we to understand the importance of the brain they lack?
Incidentally, humans with seriously compromised brains can have consciousness. Needless to say, no one has the slightest idea what human consciousness is. Even sober discussions in science magazines include propositions such as the one that your coffee mug may be conscious too. In which case, our laptops are already conscious; no need for high-tech tweaks. But somehow, that doesn’t really work as a solution…
2. Analysts who work in the artificial intelligence industry try to explain that machines don’t do meaning, that we cannot by definition design intelligences greater than ourselves, and that no combination of random and deterministic processing can increase mutual information (Levin’s Law).
Doomsday prophets do not so much dispute these problems as decline to consider them seriously. Research, we are told, will find a way around anything. But is that a reasonable basis for forecasting?
One thing a celebrity pundit can usually count on is an audience of media professionals who haven’t considered the problems carefully either and don’t want to. It is much easier and more profitable to market Doomsday than Levin’s Law. As always, the fact that laws governing the universe will eventually triumph is true but not news.
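The mutual-information point above can be illustrated numerically via the data processing inequality: once a signal has passed through a noisy channel, further processing of the output, whether deterministic or random, cannot increase its mutual information with the source. Below is a minimal sketch; the fair-bit source and the 10% noise levels are arbitrary choices for illustration, not anything from Levin's original formulation.

```python
from math import log2

def mutual_info(joint):
    """Mutual information (in bits) of a joint distribution
    given as a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def flip_channel(joint, eps):
    """Pass the second coordinate through a binary symmetric
    channel that flips the bit with probability eps."""
    out = {}
    for (x, y), p in joint.items():
        out[(x, y)] = out.get((x, y), 0) + p * (1 - eps)
        out[(x, 1 - y)] = out.get((x, 1 - y), 0) + p * eps
    return out

# X is a fair bit; Y is X observed through a 10%-noise channel.
joint_xy = flip_channel({(0, 0): 0.5, (1, 1): 0.5}, 0.1)
# Z is Y after further random post-processing (another 10% noise).
joint_xz = flip_channel(joint_xy, 0.1)

print(mutual_info(joint_xy))  # I(X;Y)
print(mutual_info(joint_xz))  # I(X;Z), never larger than I(X;Y)
```

Here I(X;Y) comes out to about 0.53 bits while I(X;Z) is about 0.32 bits: the extra processing step only loses information about X, it never manufactures more.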
Note: Stephen Hawking (1942–2018) himself owed a good deal to high tech, of course. Diagnosed with ALS at the age of 21, he also lost the ability to speak after a bout of pneumonia in 1985 but regained it via a voice synthesizer:
Hawking is very attached to his voice: in 1988, when Speech Plus gave him the new synthesizer, the voice was different so he asked them to replace it with the original. His voice had been created in the early '80s by MIT engineer Dennis Klatt, a pioneer of text-to-speech algorithms. He invented the DECtalk, one of the first devices to translate text into speech. He initially made three voices, from recordings of his wife, daughter and himself. The female's voice was called "Beautiful Betty", the child's "Kit the Kid", and the male voice, based on his own, "Perfect Paul." "Perfect Paul" is Hawking's voice.

– Joao Medeiros, "How Intel Gave Stephen Hawking A Voice" at Wired (2015)

A large number of skilled technicians worked hard, patiently, and creatively to make that possible. Doubtless, others have benefitted from the research too.
See also: Noted astronomer envisions cyborg on Mars
AI machines taking over the world? It’s a cool apocalypse but does that make it more likely?
Software pioneer says general superhuman artificial intelligence is very unlikely. The concept, he argues, shows a lack of understanding of the nature of intelligence.