“What nature is trying to tell us here is that it doesn’t really work, but the industry believes their own press clippings so much they just can’t see it,” he adds.
Even de Freitas’ DeepMind colleagues Jackie Kay and Scott Reed, who worked with him on Gato, were more circumspect when I asked them directly about his claims. When asked whether Gato was heading toward AGI, they would not be drawn. “I don’t think it’s really possible to make predictions with things like that. I try to avoid that. It’s like predicting the stock market,” Kay said.
Reed said the question was a difficult one. “I think most people working in machine learning will studiously avoid answering it. Very hard to predict, but, you know, hopefully we’ll get there one day.”
In a way, the fact that DeepMind called Gato a “generalist” may have made it a victim of the AI industry’s excessive hype around AGI. Today’s AI systems are called “narrow” AI, meaning they can only perform a specific, restricted task, such as generating text.
Some technologists, including some at DeepMind, believe that humans will one day develop “broader” AI systems able to perform a wide range of tasks as well as or better than humans. Some call this artificial “general” intelligence. Others liken it to “belief in magic.” Many top researchers, such as Meta’s chief AI scientist Yann LeCun, question whether it is even possible.
Gato is a “generalist” in the sense that it can do many different things at the same time. But that’s a world away from a “general” AI that can meaningfully adapt to new tasks that differ from those for which the model was trained, says MIT’s Andreas. “We’re still a long way from that.”
Nor will scaling up models solve the problem that they lack “lifelong learning”: the ability to be taught something once and then grasp all its implications and apply them to every subsequent decision they make, he says.
The hype around tools like Gato is harmful to the broader development of AI, argues Emmanuel Kahembwe, an AI and robotics researcher and member of Black in AI, which Timnit Gebru co-founded. “There are a lot of interesting topics that are being left to the side, that are underfunded, that deserve more attention, but that’s not what the big tech companies and the bulk of researchers at those companies care about,” he says.
Tech companies should step back and take stock of why they are building what they are building, says Vilas Dhar, president of the Patrick J. McGovern Foundation, a charity that funds AI projects “for good.”
“AGI speaks to something deeply human – the idea that we can become more than we are by building tools that propel us to greatness,” he says. “And that’s really nice, except it’s also a way to distract us from the fact that we have real problems today that we should be addressing with AI.”