DeepMind researcher claims new AI could lead to AGI, says ‘game over’

According to Dr Nando de Freitas, Principal Investigator at Google’s DeepMind, it appears humanity is on the cusp of bringing Artificial General Intelligence (AGI) into our lives.

In response to an opinion piece I recently penned, the scientist posted a thread on Twitter that started with what may be the boldest statement we’ve seen from anyone at DeepMind regarding the current progress toward artificial general intelligence:

My opinion: it’s all about scale now! Game over!


This is the full text of De Freitas’ thread:

Someone’s opinion article. My opinion: it’s all about scale now! Game over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline, … 1/N

Solving these scaling challenges is what will deliver artificial general intelligence. Research focused on these problems, such as S4 for greater memory, is needed. Philosophy about symbols isn’t. Symbols are tools in the world, and big networks have no problem creating and manipulating them 2/N

Finally and most importantly, [OpenAI co-founder Ilya Sutskever] @ilyasut is right [emoji]

Rich Sutton is right too, but the AI lesson isn’t bitter but rather sweet. I learned it from [Google researcher Geoffrey Hinton] @geoffreyhinton a decade ago. Geoff predicted what was predictable with uncanny clarity.

There’s a lot to untangle on this topic, but “it’s all about scale now” is a phrase that’s hard to misinterpret.

How did we get here?

DeepMind recently released a research paper and published a blog post about its new multi-modal AI system. The system, called Gato, is capable of performing hundreds of different tasks, ranging from controlling a robotic arm to writing poetry.

The company called it a “generalist” system, but it didn’t go so far as to say it was in any way capable of general intelligence — you can learn more about what that means here.

It’s easy to confuse something like Gato with AGI. However, the difference is that general intelligence can learn to do new things without prior training.

In my article, I compared Gato to a gaming console:

Gato’s ability to multitask is more like a video game console that can store 600 different games than like a game you can play 600 different ways. It’s not artificial general intelligence; it’s a neatly bundled set of pre-trained, narrow models.

That’s not a bad thing, if that’s what you’re looking for. But there is simply nothing in Gato’s accompanying research paper to indicate that it’s even a glance in the direction of AGI, let alone a stepping stone toward it.
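The console analogy can be sketched in code. Below is a minimal, purely hypothetical Python illustration (none of these names come from DeepMind’s paper): a bundle of pre-trained narrow models can dispatch among the tasks it already knows, but it has no mechanism at all for handling a task it was never trained on.

```python
# Hypothetical sketch of the "console of narrow models" analogy.
# Nothing here comes from DeepMind's Gato paper; names are illustrative.

class NarrowModelBundle:
    """Multitasks by lookup: one pre-trained specialist per known task."""

    def __init__(self, pretrained):
        # pretrained: dict mapping task name -> callable model
        self.pretrained = pretrained

    def perform(self, task, data):
        model = self.pretrained.get(task)
        if model is None:
            # No prior training means no capability at all --
            # the crux of the console analogy.
            raise NotImplementedError(f"never trained on task: {task!r}")
        return model(data)


bundle = NarrowModelBundle({
    "caption_image": lambda x: f"a photo of {x}",
    "stack_blocks": lambda x: f"stacked {x} blocks",
})

print(bundle.perform("caption_image", "a cat"))  # a trained task succeeds
```

A generally intelligent system, by contrast, would need no `pretrained` table at all: handed an unseen task, it would have to work the task out on the fly rather than raise an error.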

Dr. de Freitas disagrees with this view. That’s not surprising, but what I found shocking was the second tweet in his thread:

The bit at the top about “philosophy about symbols” was probably written in direct response to my opinion piece. But just as Gotham’s villains know what the Bat-Signal means, those who follow the AI world know that mentioning symbols and AGI together is a surefire way to summon Gary Marcus.

Enter Marcus

Marcus, a world-renowned scientist, author, and the founder and CEO of Robust.AI, has spent the past several years advocating a new approach to artificial general intelligence. He believes the entire field needs to change its underlying methodology for building AGI, and he co-wrote a popular book to that end, “Rebooting AI,” with Ernest Davis.

He has debated and discussed his ideas with everyone from Yann LeCun of Facebook to Yoshua Bengio of the University of Montreal.

And for the inaugural edition of his Substack newsletter, Marcus took de Freitas’ remarks as the opening for a fiery (but respectful) rebuttal.

Marcus describes the scaling of AI models in pursuit of AGI as “Scaling-Uber-Alles,” and refers to these systems as attempts at “alt intelligence” — as opposed to artificial intelligence that attempts to imitate human intelligence.

On the topic of DeepMind exploration, he wrote:

There is nothing wrong, per se, in pursuing alternative intelligence.

Alt Intelligence represents a hunch (or, more precisely, a family of hunches) about how to build intelligent systems, and since no one yet knows how to build any kind of system that matches the flexibility and resourcefulness of human intelligence, it’s certainly fair game for people to pursue multiple different hypotheses about how to get there.

Nando de Freitas has just come out as emphatically as possible in defense of that hypothesis, which I will refer to as Scaling-Uber-Alles. Of course, that name, Scaling-Uber-Alles, isn’t entirely fair.

De Freitas knows very well (as I will discuss below) that you can’t just make models bigger and hope for success. People have been doing a lot of scaling lately, and they’ve achieved some great successes, but they’ve also run into some roadblocks.

Marcus goes on to describe the problem of incomprehension plaguing the AI industry’s giant models.

In essence, Marcus argues that no matter how cool and amazing systems like OpenAI’s DALL-E (a model that generates bespoke images from descriptions) or DeepMind’s Gato are, they remain incredibly fragile.

He writes:

DeepMind’s newly unveiled star, Gato, is capable of cross-modal feats never before seen in AI, but nonetheless, when you look at the fine print, it remains stuck in the same land of unreliability: moments of brilliance coupled with absolute incomprehension.

Of course, it is not uncommon for advocates of deep learning to make the reasonable point that humans also make mistakes.

But anyone who is candid will recognize that these kinds of errors reveal that something is, for now, deeply amiss. If either of my children routinely made errors like these, I would, no exaggeration, drop everything else I’m doing and bring them to the neurologist immediately.

While that’s worth a laugh, there’s a serious undertone here. When a researcher at DeepMind declares that “the game is over,” it conjures a vision of the near or distant future that doesn’t quite add up.

AGI? Really?

Neither Gato, DALL-E, nor GPT-3 is robust enough for unrestricted public use. Each of them requires hard filters to keep it from skewing toward bias and, worse, none of them can consistently produce solid results. And not just because we haven’t discovered the secret sauce to coding AGI, but because human problems are often hard, and they don’t always have a single, trainable solution.
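To illustrate what “hard filters” mean in practice, here is a toy sketch — purely hypothetical, and not the actual safety pipeline of OpenAI, DeepMind, or anyone else: because the model itself offers no reliability guarantee, a separate gate checks its raw output before anything reaches users.

```python
# Toy illustration of a "hard filter": a blocklist gate applied to model
# output before it reaches users. Purely hypothetical -- real deployed
# filters are far more sophisticated than a word blocklist.

BLOCKLIST = {"badword"}  # stand-in term; illustrative only

def safe_generate(model, prompt):
    output = model(prompt)
    # The model gives no reliability guarantee, so a separate check
    # gates everything it produces.
    if any(term in output.lower().split() for term in BLOCKLIST):
        return "[output withheld by content filter]"
    return output

unreliable_model = lambda p: p + " ... badword"
print(safe_generate(unreliable_model, "tell me a story"))
```

The point of the sketch is the architecture, not the blocklist: the filter sits outside the model precisely because the model cannot be trusted to police itself.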

It’s unclear how scaling, even paired with breakthrough reasoning algorithms, could fix these problems.

This is not to say that giant-sized models are not useful or worthwhile endeavors.

What DeepMind, OpenAI, and similar labs are doing is very important. It’s state-of-the-art science.

But to declare that the game is over? To suggest that AGI will emerge from a system whose distinguishing contribution is how it serves up models? Gato is amazing, but that feels like a stretch.

There is nothing in de Freitas’ spirited rebuttal to change my mind.

Gato’s creators are clearly brilliant. I’m not pessimistic about AGI because Gato isn’t amazing enough. Rather, the opposite is true.

I’m afraid AGI could be decades — perhaps centuries — away precisely because of Gato, DALL-E, and GPT-3. Each of them demonstrates a breakthrough in our ability to interact with computers.

It’s remarkable to watch a machine pull off Copperfield-esque feats of misdirection and prestidigitation, especially once you understand that said machine is no smarter than a toaster (and demonstrably dumber than the dumbest mouse).

To me, it’s clear that we’ll need more than just … more … to take modern AI from the “Is this your card?” level of trickery to the Gandalfian sorcery of the AGI we were promised.

As Marcus concludes in his newsletter:

If we want to build AGI, we will need to learn something from humans — how they reason and understand the physical world, and how they represent and acquire language and complex concepts.

It is sheer arrogance to believe otherwise.



2022-05-16 22:30:00
