Zuck announces that Meta will join the AGI chase

Meta promises advances "in every area of AI"

Subscribe to get breakdowns of the most important developments in AI in your inbox every morning.

Zuck announced last week:

  • Meta has “long term goals of building general intelligence, open sourcing it responsibly, and making it available and useful to everyone in all of our daily lives”

  • They have 600,000 Nvidia H100 GPU equivalents of compute. To put this into perspective, OpenAI’s GPT-4 was reportedly trained on 8,000 H100 GPUs for 100 days; Meta now has 75 times that number of GPUs, including $10.5 billion of H100s alone.

  • They are training Llama 3 - the successor to Meta’s popular open source language model Llama 2 - which will presumably match OpenAI’s GPT-4. This puts the open source release of a GPT-4 equivalent on roughly a three-month fuse, assuming a development trajectory similar to Llama 2’s.

  • They promise advances “in every area of AI. From reasoning to planning, to coding, to memory and other cognitive abilities”
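The compute comparison in the bullets above can be sanity-checked in a few lines. The fleet size, GPT-4 cluster figures, and $10.5B spend are from the article; the ~$30k per-H100 street price is my assumption, used only to show what that budget roughly implies:

```python
# Sanity-check the compute figures quoted above.
meta_h100_equivalents = 600_000   # Meta's stated H100-equivalent fleet
gpt4_gpus = 8_000                 # H100s reportedly used to train GPT-4
gpt4_days = 100                   # reported training duration

ratio = meta_h100_equivalents / gpt4_gpus
print(f"Meta's fleet is {ratio:.0f}x the GPT-4 training cluster")  # 75x

# At the article's $10.5B figure and an ASSUMED ~$30k street price
# per card, the H100 spend alone buys roughly 350,000 physical GPUs.
h100_spend = 10.5e9
assumed_unit_price = 30_000
print(f"~{h100_spend / assumed_unit_price:,.0f} H100s at assumed price")
```

The gap between 350,000 physical H100s and 600,000 H100 equivalents would be made up by Meta's other accelerators, counted at their H100-relative throughput.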

Now, the interesting thing of course is that Zuck’s AI chief, Yann LeCun, has been very negative about OpenAI’s approach to AGI - questioning whether it is feasible at all, let alone achievable in a reasonable timeframe:

..autoregressive [models] like chatGPT .. simply cannot reason nor plan. ..They have a very superficial knowledge of the underlying reality.

- Yann LeCun, 01/13/2024

And in fact, Yann does not believe AGI will come from imitating higher human functions, but that base animal cognition will first have to be traversed:

Too often, we think a task is easy because some animal can do it. But the reality is that the task is fiendishly complex and the animal is much smarter than we think.

Conversely, we think tasks like playing chess, calculating an integral, or producing grammatically correct text are complex because only some humans can do them after years of training. But it turns out these things aren't that complicated and computers can do them much better than us.

This is why the phrase "Artificial General Intelligence" to designate human-level intelligence makes absolutely no sense

- Yann LeCun, 01/13/2024

Zuck pushes back on this, however, saying:

I don’t have a one-sentence, pithy definition. You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.

- Mark Zuckerberg, The Verge

The crux of this debate goes way back, and it was just a friendly intellectual exercise - until the talent started to move:

..The planning expert at OpenAI is Noam Brown, who worked on Libratus (poker) and Cicero (Diplomacy) at FAIR (Meta), both of which use planning. I suspect he has something to do with Q*. I don't think it's the kind of breakthrough the Twittersphere makes it to be.

People need to calm down.

- Yann LeCun, 11/23/2023

That then brought down the hammer from on high:

.. we need to build for general intelligence. I think that’s important to convey because a lot of the best researchers want to work on the more ambitious problems.

- Mark Zuckerberg, The Verge

Which is really a fascinating window into a meeting where the CEO tells the research chief, “You’re telling me we lost our planning guy to OpenAI, and he may end up inventing AGI? …I don’t care what you call it. The market calls it AGI, the market wants it, and that’s what we’re going to give them. Posthaste, s’il vous plaît.”

