CAMBRIDGE, Mass. (AP) – Computer scientists who helped build the foundations of today's artificial intelligence technology are warning of its dangers, but that doesn't mean they agree on what those dangers are or how to prevent them.
Humanity's survival is threatened when "smart things can outsmart us," the so-called Godfather of AI, Geoffrey Hinton, said at a conference Wednesday at the Massachusetts Institute of Technology.
"It may keep us around for a while to keep the power stations running," Hinton said. "But after that, maybe not."
After retiring from Google so he could speak freely about the risks of the technology, the 75-year-old Hinton said he's recently changed his views about the reasoning capabilities of the computer systems he's spent a lifetime researching.
"These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people," Hinton said, addressing the crowd attending MIT Technology Review's EmTech Digital conference from his home via video. "Even if they can't directly pull levers, they can certainly get us to pull levers."
"I wish I had a nice simple solution I could push, but I don't," he added. "I'm not sure there is a solution."
Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the Turing Award, told The Associated Press on Wednesday that he's "pretty much aligned" with Hinton's concerns brought on by chatbots such as ChatGPT and related technology, but worries that simply saying "We're doomed" is not going to help.
"The main difference, I would say, is he's kind of a pessimistic person, and I'm more on the optimistic side," said Bengio, a professor at the University of Montreal. "I do think that the dangers, the short-term ones and the long-term ones, are very serious and need to be taken seriously by not just a few researchers but governments and the population."
There are plenty of signs that governments are listening. The White House has called in the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet Thursday with Vice President Kamala Harris in what's being described by officials as a frank discussion of how to mitigate both the near-term and long-term risks of their technology. European lawmakers are also accelerating negotiations on sweeping new rules for artificial intelligence.
But all the talk of the most dire future dangers has some worried that hype around superhuman machines, which don't exist, is distracting from attempts to set practical safeguards on current AI products that are largely unregulated and have been shown to cause harm.
Margaret Mitchell, a former leader on Google's AI ethics team, said she's upset that Hinton didn't speak out during his decade in a position of power at Google, especially after the 2020 ouster of Timnit Gebru, who had studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google's Bard.
"It's a privilege that he gets to jump from the realities of the propagation of discrimination now, the propagation of hate language, the toxicity and nonconsensual pornography of women, all of these issues that are actively harming people who are marginalized in tech," said Mitchell, who was also forced out of Google in the aftermath of Gebru's departure. "He's skipping over all of those things to worry about something farther off."
Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook parent Meta, were all awarded the Turing Award in 2019 for their breakthroughs in the field of artificial neural networks, instrumental to the development of today's AI applications such as ChatGPT.
Bengio, the only one of the three who didn't take a job with a tech giant, has voiced concerns for years about near-term AI risks, including job market destabilization, automated weaponry and the dangers of biased data sets.
But those concerns have grown recently, leading Bengio to join other computer scientists and tech business leaders like Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month pause on developing AI systems more powerful than OpenAI's latest model, GPT-4.
Bengio said Wednesday he believes the latest AI language models already pass the "Turing test," named after the method British codebreaker Alan Turing introduced in 1950 to measure when AI becomes indistinguishable from a human, at least on the surface.
"That's a milestone that can have drastic consequences if we're not careful," Bengio said. "My main concern is how they can be exploited for nefarious purposes to destabilize democracies, for cyberattacks, disinformation. You can have a conversation with these systems and think that you're interacting with a human. They're difficult to spot."
Where researchers are less likely to agree is on how current AI language systems, which have many limitations, including a tendency to fabricate information, might actually get smarter than humans, not just in memorizing huge troves of information but in showing critical reasoning and other human skills.
Aidan Gomez was one of the co-authors of the pioneering 2017 paper that introduced a so-called transformer technique, the "T" at the end of ChatGPT, for improving the performance of machine-learning systems, especially in how they learn from passages of text. Then just a 20-year-old intern at Google, Gomez remembers lying on a couch at the company's California headquarters when his team sent out the paper around 3 a.m. on the day it was due.
"Aidan, this is going to be so huge," he remembers a colleague telling him, of the work that's since helped lead to new systems that can generate humanlike prose and imagery.
Six years later, and now CEO of his own AI company, Cohere, which Hinton has invested in, Gomez is enthused about the potential applications of these systems but bothered by fearmongering he says is "detached from the reality" of their true capabilities and "relies on extraordinary leaps of imagination and reasoning."
"The notion that these models are somehow gonna get access to our nuclear weapons and launch some sort of extinction-level event is not a productive discourse to have," Gomez said. "It's harmful to those real pragmatic policy efforts that are trying to do something good."
Asked about his investments in Cohere on Wednesday in light of his broader concerns about AI, Hinton said he had no plans to pull his investments because there are still many helpful applications of language models in medicine and elsewhere. He also said he hadn't made any bad decisions in pursuing the research he started in the 1970s.
"Until very recently, I thought this existential crisis was a long way off," Hinton said. "So I don't really have any regrets about what I did."