Can We Stop Ourselves
05-07-2023
Roger Berkowitz
Technology is not bad. On their own, social media, the internet, and artificial intelligence are tools. They can be used to further humanity and also to destroy it. This is true of artificial intelligence today. It can be used to increase food production and cure illnesses. But it can also be used to create intelligent machines that would make humans superfluous, and robotic warriors that might eliminate those superfluous humans. The danger of AI lies not necessarily in the technology itself, but in how we will use it. Geoffrey Hinton, one of the pioneers of AI research, recently quit his job at Google to warn that AI will be used in ways that do fantastic harm. “It is hard to see,” he says, “how you can prevent the bad actors from using it for bad things.” In an essay on Hinton’s worries, Cade Metz writes:
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.