- Talk of AI agents is everywhere at Davos. AI pioneer Yoshua Bengio warned against them.
- Bengio said AGI-powered agents could lead to “catastrophic scenarios.”
- Bengio is researching how to build non-agent systems to keep agents in check.
Artificial intelligence pioneer Yoshua Bengio was at the World Economic Forum in Davos this week with a message: AI agents can end badly.
The topic of AI agents – artificial intelligence that can act independently of human input – has been a favorite at this year’s gathering in snowy Switzerland. The event has drawn a collection of pioneering AI researchers to debate where AI is headed next, how it should be governed, and when we might see signs of machines that can reason as well as humans, a milestone known as Artificial General Intelligence (AGI).
“All catastrophic scenarios with AGI or surveillance happen if we have agents,” Bengio told BI in an interview. He said he believes it is possible to achieve AGI without building agent systems.
“All AI for science and medicine, all the things they care about, it’s not an agent,” Bengio said. “And we can continue to build more powerful systems that are non-agentic.”
Bengio, a Canadian research scientist whose early work on deep learning and neural networks laid the foundation for the modern AI boom, is considered one of the “godfathers of AI” along with Geoffrey Hinton and Yann LeCun. Like Hinton, Bengio has warned against the potential harms of AI and called for collective action to mitigate the risks.
After two years of experimenting with AI, businesses are recognizing the tangible return on investment that AI agents could provide, and agents could enter the workforce in significant numbers as soon as this year. OpenAI, which does not have a presence at this year’s Davos, this week unveiled an AI agent that can surf the web for you and perform tasks such as making restaurant reservations or adding groceries to your cart. Google has been eyeing a similar tool of its own.
The problem, as Bengio sees it, is that people will keep building agents no matter what, especially when competing companies and countries worry that rivals will get to agentic AI before they do.
“The good news is that if we build non-agent systems, they can be used to control agent systems,” he told BI.
One way to do that would be to build more sophisticated “monitors” that can oversee agentic systems, though that would require significant investment, Bengio said.
He also called for national regulations that would prevent AI companies from building agent models without first proving the system would be safe.
“We can advance our science of safe and capable AI, but we have to acknowledge the risks, understand scientifically where they come from, and then make the technological investments to make it happen before it’s too late and we build things that can destroy us,” Bengio said.
‘I want to raise a red flag’
Before speaking to BI, Bengio spoke on a panel about AI safety alongside Google DeepMind CEO Demis Hassabis.
“I want to raise a red flag. This is the most dangerous path,” Bengio told the audience when asked about AI agents. He pointed to the use of AI for scientific discovery, such as DeepMind’s breakthroughs in protein folding, as examples of how AI can still be powerful without being an agent. Bengio said he believes it is possible to reach AGI without giving AI agency.
“It’s a gamble, I agree,” he said, “but I think it’s a worthwhile bet.”
Hassabis agreed with Bengio that steps should be taken to mitigate the risks, such as hardening cybersecurity or testing agents in simulations before releasing them. That would only work if everyone agreed to build them the same way, he added.
“Unfortunately I think there’s an economic gradient, beyond science and workers, that people want their systems to be agents,” Hassabis said. “When you say, ‘I recommend a restaurant,’ why wouldn’t you want the next step, which is booking the table?”