Geoffrey Hinton, a renowned cognitive psychologist and computer scientist, recently resigned from his position at Google so he could speak freely about the risks of the artificial intelligence used in chatbots such as ChatGPT and Google Bard. His resignation was meant to voice his concerns about the dangers that lie ahead with this groundbreaking technology.
Hinton’s life’s work laid the foundation for the complex AI systems tech companies use today. In a recent interview, however, he said this had brought him regret because of AI’s possible implications for society. Regarding his exit from Google, the scientist said, “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business. As long as I’m paid by Google, I can’t do that.”
“These things are totally different from us,” he continued. “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.” Hinton contrasted the scale of today’s AI models with that of the human brain. “Our brains have 100 trillion connections,” Hinton stated. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
Hinton explained that AI could easily outperform humans in learning new things. He said, “People seemed to have some kind of magic. Well, the bottom falls out of that argument as soon as you take one of these large language models and train it to do something new. It can learn new tasks extremely quickly.”
The scientist explained that one of his top fears is that the internet will be overrun with false information propagated by AI that is so convincing people won’t be able to distinguish it from the truth. “My fear is that the internet will be inundated with false photos, videos, and text, and the average person will not be able to know what is true anymore,” he said.
Long term, Hinton worries that AI will exceed human intelligence far sooner than we think. “The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Earlier this year, thousands of experts and industry leaders signed a petition to slow the development of AI until its risks can be fully understood and regulations can be put into place. The American Tribune wrote:
A recent letter from the Future of Life Institute called for a six-month pause on the development of AI systems for researchers to fully understand the risks of the technology and build safeguards against it. Elon Musk, among many other experts, signed the letter in support. The letter opens, “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The letter concludes, “Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”