Regardless of your opinion of Yann or his views on autoregressive models being "sufficient" for what most would describe as AGI or ASI, this is probably a good thing for Europe. We need more well-capitalized labs that aren't US- or China-centric, and while I do like Mistral, they just haven't been keeping up on the frontier of model performance and seem to have pivoted into being integration specialists and consultants for EU corporations. That's fine, and they've got to make money, but fully ceding the research front is not a good way to keep the EU competitive.
LeCun's technical approach with AMI will likely be based on JEPA, which is a very different approach from what most US or Chinese AI labs are taking.
If you're looking to learn about JEPA, LeCun's vision document "A Path Towards Autonomous Machine Intelligence" is long but sketches out a very comprehensive vision of AI research: https://openreview.net/pdf?id=BZ5a1r-kVsf
Training JEPA models is within reach, even for startups. For example, we're a 3-person startup that trained a health time-series JEPA. There are JEPA models for computer vision and (even) for LLMs.
You don't need a $1B seed round to do interesting things here. We need more interesting, orthogonal ideas in AI. So I think it's good we're going to have a heavyweight lab in Europe alongside the US and China.
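For anyone curious what "training a JEPA" actually involves, here is a minimal PyTorch sketch of the core idea: predict the latent representation of a hidden part of the input from a visible part, with a slow-moving target encoder to avoid collapse. All names, sizes, and constants are made up for illustration; this is not AMI's or Meta's code, and real systems like I-JEPA add masking strategies and more machinery.

```python
# Minimal JEPA-style training step (illustrative sketch only).
# Idea: predict the *latent* of a hidden part of the input from a visible
# part, instead of reconstructing raw pixels/tokens.
import copy
import torch
import torch.nn as nn

DIM = 64  # made-up embedding size

encoder = nn.Sequential(nn.Linear(32, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
target_encoder = copy.deepcopy(encoder)          # EMA copy, never backpropped
for p in target_encoder.parameters():
    p.requires_grad = False
predictor = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def jepa_step(context, target):
    """context/target: two views of one sample, e.g. earlier vs. later
    windows of a time series, each of shape (batch, 32)."""
    z_ctx = encoder(context)                     # embed the visible part
    with torch.no_grad():
        z_tgt = target_encoder(target)           # embed the hidden part
    loss = nn.functional.mse_loss(predictor(z_ctx), z_tgt)  # predict in latent space
    opt.zero_grad(); loss.backward(); opt.step()
    # Slowly track the online encoder (EMA) to help avoid collapse.
    with torch.no_grad():
        for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
            tp.mul_(0.996).add_(p, alpha=0.004)
    return loss.item()

x = torch.randn(16, 64)                          # fake batch of time-series windows
print(jepa_step(x[:, :32], x[:, 32:]))           # context = first half, target = second half
```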
There seem to be other news articles mentioning that they are setting up in Singapore as their base. https://www.straitstimes.com/business/ai-godfather-raises-1-...
Hm, Singapore looks more like "one of their bases"; they will have offices in Paris, Montréal, Singapore, and New York (according to both this article and the interview Yann LeCun gave this morning on France Inter, the most listened-to radio station in France).
Of course, the relevant newspapers in each of those places highlight that it's coming to them, but it really does seem to be distributed.
Probably just a satellite office.
Might be to stay close to some of Yann's collaborators, like Xavier Bresson at NUS.
That's a Singaporean newspaper, though; not sure if it's objectively their main base, or just one of them.
"Show me the incentive and I will show you the outcome."
Almost certainly the IP will be held in Singapore for tax reasons.
> they are setting up in Singapore as their base
Europe in general has been tightening up its rules, taxes, and laws around startups and companies, especially tech and remote ones.
It's been less friendly these days.
Yann LeCun literally said this morning on the radio in France that it is headquartered in Paris and will pay taxes in France. Go figure…
No, he said something like "well yes, but only for the part of the profits made in France".
For such companies, France also offers generous R&D tax credits (Crédit Impôt Recherche): companies can recover roughly 30% of eligible R&D expenses incurred in France as a tax credit, which can eventually be refunded (in cash) if the company has no taxable profit.
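As a back-of-the-envelope illustration of that 30% figure (a toy model only; the real CIR has rate tiers, eligibility rules, and audits):

```python
# Toy CIR estimate: ~30% of eligible R&D spend, per the rough figure above.
# Real rules differ (the rate drops above a spend threshold, and eligibility
# of each expense is assessed).
def cir_credit(eligible_rd_spend_eur: float, rate: float = 0.30) -> float:
    return eligible_rd_spend_eur * rate

print(cir_credit(10_000_000))  # 10M EUR of eligible R&D -> ~3M EUR credit
```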
Is that alongside 100% of R&D expenses amortized in taxes when a company has taxable profit covering them?
Yes indeed, if the company is profitable.
Doesn’t he live in New York himself? Although not sure if that matters depending on his role
There will be no corporate taxes for a long time, so all's good.
I didn't really know who he was, so I went and found his Wikipedia page, which is written like either he wrote it himself to stroke his ego, or someone who likes him wrote it to stroke his ego:
> He is the Jacob T. Schwartz Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. He served as Chief AI Scientist at Meta Platforms before leaving to work on his own startup company.
That entire sentence before the remark about his service at Meta could have been axed; it's weird to me when people compare themselves to someone else who is well known. It's the most Kanye West thing you can do. Mind you, the more I read about him, the more I discovered he is in fact egotistical. Good luck having a serious engineering team with someone who is egotistical.
You underestimate academia. Any academic who reads these two sentences only focuses on the first one: he has a named chair at Courant. In Germany, being a Prof is added to your ID card/passport and becomes part of your official name, like knighthood in other countries.
https://cims.nyu.edu/dynamic/news/1441/
This is just the official name of a chair at NYU. I'm not even sure Jacob T. Schwartz is more well known than Yann LeCun
Yann is definitely more well-known outside of academia. Inside academia, it's going to depend a lot on your specific background and how old you are.
It's not comparing him to anyone. He has an endowed professorship. This is standard in academia, and you give the name because a) it's prestigious for the recipient and b) it strokes the ego of the donor.
That’s not a comparison to another person. That’s his job title. It is not uncommon for universities to have distinguished chairs within departments named after a notable person—in this case, the founder of NYU’s Department of Computer Science.
Eh, that paragraph reads perfectly normal to me.
Either you have not read enough Wikipedia pages, or you have too much to complain about. (Or both.)
Justifiable.
There are a lot more degrees of freedom in world models.
LLMs are fundamentally capped because they only learn from static text -- human communications about the world -- rather than from the world itself, which is why they can remix existing ideas but find it all but impossible to produce genuinely novel discoveries or inventions. A well-funded and well-run startup building physical world models (grounded in spatiotemporal understanding, not just language patterns) would be attacking what I see as the actual bottleneck to AGI. Even if they succeed only partially, they may unlock the kind of generalization and creative spark that current LLMs structurally can't reach.
I don't understand this view. The way I see it, the fundamental bottleneck to AGI is continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation. World models don't solve any of these problems; they are fundamentally the same kind of deep learning architectures we are used to working with. Heck, if you think learning from the world itself is the bottleneck, you can just put a vision-action LLM on a reinforcement learning loop in a robotic/simulated body.
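For concreteness, the "RL loop around a model in a body" idea is structurally just the classic observe-act-learn cycle. A minimal Gymnasium sketch, with the model and the learning rule stubbed out (everything here is illustrative; the hard part is what goes inside the stubs):

```python
# Skeleton of the "model in a body" loop: observe -> act -> get feedback -> learn.
import gymnasium as gym

env = gym.make("CartPole-v1")  # stand-in for a robotic/simulated body

def policy(observation):
    # Stand-in for a vision-action model choosing an action from an observation.
    return env.action_space.sample()

def update(transition):
    # Stand-in for whatever learning rule consumes (obs, action, reward, next_obs).
    pass

obs, info = env.reset()
for _ in range(1000):
    action = policy(obs)
    next_obs, reward, terminated, truncated, info = env.step(action)
    update((obs, action, reward, next_obs))
    obs = next_obs
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```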
> I don't understand this view. The way I see it, the fundamental bottleneck to AGI is continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.
Even with continuous backpropagation and "learning" that enriches the training data (so-called online learning), the limitations will not disappear. LLMs will not be able to conclude things about the world based on fact and deduction. They only consider what is likely given their training data. They will not foresee/anticipate events that are unlikely or non-existent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way.
Whether humans always apply that much effort to conclude these things is another question. The point is that humans are fundamentally capable of doing that, while LLMs structurally are not.
The problems are structural/architectural. I think it will take another 2-3 major leaps in architecture before these AI models reach human-level general intelligence, if they ever reach it. So far they can "merely" often "fake it" when things are statistically common in their training data.
> Even with continuous backpropagation and "learning"
That's what I said. Backpropagation cannot be enough; that's not how neurons work in the slightest. When you put biological neurons in a Pong environment they learn to play not through some kind of loss or reward function; they self-organize to avoid unpredictable stimulation. As far as I know, no architecture learns in such an unsupervised way.
https://www.sciencedirect.com/science/article/pii/S089662732...
> They will not foresee/anticipate events that are unlikely or non-existent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way.
Can you be a bit more specific at all? Maybe via an example?
I'm sure that if a car appeared from nowhere in the middle of your living room, you would not be prepared at all.
So my question is: when is there enough training data that you can handle 99.99% of the world?
> Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.
While I suspect the latter is a real problem (because all mammal brains* are much more example-efficient than all ML), the former is more about productisation than a fundamental thing: the models can be continuously updated already, but that makes it hard to deal with regressions. You kinda want an artefact with a version stamp that doesn't change itself before you release the update, especially as this isn't like normal software where specific features can be toggled on or off in isolation from everything else.
* I think. Also, I'm saying "mammal" because of an absence of evidence (to my *totally amateur* skill level) not evidence of absence.
You could have continual learning on text and still be stuck in the same "remixing baseline human communications" trap. It's a nasty one, very hard to avoid, possibly even structurally unavoidable.
As for the "just put a vision LLM in a robot body" suggestion: People are trying this (e.g. Physical Intelligence) and it looks like it's extraordinarily hard! The results so far suggest that bolting perception and embodiment onto a language-model core doesn't produce any kind of causal understanding. The architecture behind the integration of sensory streams, persistent object representations, and modeling time and causality is critically important... and that's where world models come in.
I don't understand why online learning is that necessary. If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI. A hippocampus is a nice upgrade to that, but not super obviously on the critical path.
> If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI.
I like how people are accepting this dubious assertion that Einstein would be "useful" if you surgically removed his hippocampus and engaging with this.
It also calls this Einstein an AGI rather than a disabled human???
I guess the sheer amount and variety of information you would need to pre-encode to get an Einstein at 40 is huge: a daily stream of high-resolution video, plus the actions, consequences, thoughts, and ideas he had in every single moment until the age of 40. That includes social interactions, like a conversation and the mimicry of the other person, in combination with what was said and background knowledge about the other person. Even a single conversation's data is a huge amount of data.
But one might say that the brain is not lossless ... True, good point. But in what way is it lossy? Can that be simulated well enough to learn an Einstein? What gives events significance is very subjective.
It could possibly be useful but I don't see why it would be AGI.
That's true. Though could that hippocampus-less Einstein be able to keep making novel complex discoveries from that point forward? Seems difficult. He would rapidly reach the limits of his short term memory (the same way current models rapidly reach the limits of their context windows).
Where does that training data come from?
The fact that models aren't continually updating seems more like a feature. I want to know the model is exactly the same as it was the last time I used it. Any new information it needs can be stored in its context window or stored in a file to read the next time it needs to access it.
> The fact that models aren't continually updating seems more like a feature.
I think this is true to some extent: we like our tools to be predictable. But we’ve already made one jump by going from deterministic programs to stochastic models. I am sure the moment a self-evolving AI shows up that clears the "useful enough" threshold, we’ll make that jump as well.
The term LLM is confusing your point because VLMs belong to the same bin according to Yann.
Using the term autoregressive models instead might help.
I have a pet peeve with the concept of "a genuinely novel discovery or invention". What do you imagine this to be? Can you point me towards a discovery or invention that was "genuinely novel", ever?
I don't think it makes sense conceptually unless you're literally referring to discovering new physical things like elements or something.
Humans are remixers of ideas. That's all we do all the time. Our thoughts and actions are dictated by our environment and memories; everything must necessarily be built up from pre-existing parts.
Genuinely novel discovery or invention?
Einstein’s theory of relativity springs to mind, which is deeply counter-intuitive and relies on the interaction of forces unknowable to our basic Newtonian senses.
There’s an argument that it’s all turtles (someone told him about universes, he read about gravity, etc), but there are novel maths and novel types of math that arise around and for such theories which would indicate an objective positive expansion of understanding and concept volume.
W. Brian Arthur's book "The Nature of Technology" provides a framework for classifying new technology as elemental vs. innovative that I find helpful. For example, the Hunt-McIlroy diff operates on the phenomenon that ordered correspondence survives editing. That was an invention (the discovery of a natural phenomenon and a means to harness it). Myers diff improves the performance by exploiting the fact that text changes are sparse. That's innovation. A Python app using libdiff, that's engineering. And then you might say in terms of "descendants": invention > innovation > engineering. But it's just a perspective.
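To make the "engineering" tier concrete, here's what consuming a diff library looks like, using Python's stdlib difflib (which, for what it's worth, uses a longest-matching-block heuristic rather than Myers diff, but rests on the same underlying invention):

```python
# "Engineering" tier: use an off-the-shelf diff rather than implementing one.
import difflib

old = ["def f(x):", "    return x + 1", ""]
new = ["def f(x):", "    # add two instead", "    return x + 2", ""]

# Decades of invention and innovation hide behind this one call.
for line in difflib.unified_diff(old, new, fromfile="a.py", tofile="b.py", lineterm=""):
    print(line)
```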
Suno is transformer-based; in a way it's a heavily modified LLM.
You can't get Suno to do anything that's not in its training data. It is physically incapable of inventing a new musical genre. No matter how detailed the instructions you give it, and even if you cheat and provide it with actual MP3 examples of what you want it to create, it is impossible.
The same goes for LLMs and invention generally, which is why they've made no important scientific discoveries.
You can learn a lot by playing with Suno.
Novel things can be incremental. I don't think LLMs can do that either, at least I've never seen one do it.
Whether it is text or an image, it is just bits for a computer. A token can represent anything.
Sure, but don't conflate the representation format with the structure of what's being represented.
Everything is bits to a computer, but text training data captures the flattened, after-the-fact residue of baseline human thought: Someone's written description of how something works. (At best!)
A world model would need to capture the underlying causal, spatial, and temporal structure of reality itself -- the thing itself, that which generates those descriptions.
You can tokenize an image just as easily as a sentence, sure, but a pile of images and text won't give you a relation between the system and the world. A world model, in theory, can. I mean, we ought to be sufficient proof of this, in a sense...
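"You can tokenize an image just as easily as a sentence" is literally how vision transformers do it: cut the image into patches and project each one to an embedding, giving a token sequence formally identical to text. A rough sketch (shapes and sizes are illustrative):

```python
# ViT-style patch tokenization: an image becomes a sequence of patch
# embeddings, no different in form from a sequence of word embeddings.
import torch
import torch.nn as nn

image = torch.randn(1, 3, 224, 224)              # batch, channels, height, width
patch, dim = 16, 768

# Cut into 16x16 patches and flatten each patch into a vector...
patches = image.unfold(2, patch, patch).unfold(3, patch, patch)  # 1,3,14,14,16,16
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, 14 * 14, 3 * patch * patch)

# ...then project each patch to an embedding: 196 "visual tokens".
tokens = nn.Linear(3 * patch * patch, dim)(patches)
print(tokens.shape)                               # torch.Size([1, 196, 768])
```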
It’s worth noting how our human relationship or understanding of our world model changed as our tools to inspect and describe our world advanced.
So when we think about capturing any underlying structure of reality itself, we are constrained by the tools at hand.
The capability of the tool forms the description which grants the level of understanding.
Why can't LLMs (transformers trained on multimodal token sequences, potentially containing spatiotemporal information) be world models?
https://medium.com/state-of-the-art-technology/world-models-...
> One major critique LeCun raises is that LLMs operate only in the realm of language, which is a simple, discrete space compared to the continuous, complex physical world we live in. LLMs can solve math problems or answer trivia because such tasks reduce to pattern completion on text, but they lack any meaningful grounding in physical reality. LeCun points out a striking paradox: we now have language models that can pass the bar exam, solve equations, and compute integrals, yet “where is our domestic robot? Where is a robot that’s as good as a cat in the physical world?” Even a house cat effortlessly navigates the 3D world and manipulates objects — abilities that current AI notably lacks. As LeCun observes, “We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”
But they don't only operate on language? They operate on token sequences, which can be images, coordinates, time, language, etc.
It’s an interesting observation, but I think you have it backwards. The examples you give are all using discrete symbols to represent something real and communicating this description to other entities. I would argue that all your examples are languages.
What does the first L stand for? That's not just vestigial: their model of the world is formed almost exclusively from language, rather than from a range of things each contributing significantly, as it is for humans.
The biggest thing that's missing is actual feedback on their decisions. They have no idea of that, because transformers and embeddings don't model that yet. And language descriptions and image representations of feedback aren't enough. They are too disjointed. It needs more.
There will be no "unlocking of AGI" until we develop a new science capable of artificial comprehension. Comprehension is the cornucopia that produces everything we are, given raw stimulus an entire communicating Universe is generated with a plethora of highly advanceds predator/prey characters in an infinitely complex dynamic, and human science and technology have no lead how to artificially make sense of that in a simultaneous unifying whole. That's comprehension.
Ironically, your comment is practically incomprehensible.
These two comments above me capture Slashdot in the early 2000s.
A lot more justifiable than say, Thinking Machines at least. But we will "see".
World models and vision seem like a great use case for robotics, and I can imagine that being the main driver of AMI.
Yann LeCun seeks $5B+ valuation for world model startup AMI (Amilabs).
He has hired LeBrun to the helm as CEO.
AMI has also hired LeFunde as CFO and LeTune as head of post-training.
They’re also considering hiring LeMune as Head of Growth and LePrune to lead inference efficiency.
https://techcrunch.com/2025/12/19/yann-lecun-confirms-his-ne...
Why didn't they just call it LeLabs?
I feel like I'm the only one not getting the world models hype. We've been talking about them for decades now, and all of it is still theoretical. Meanwhile LLMs and text foundation models showed up, proved to be insanely effective, took over the industry, and people are still going "nah LLMs aren't it, world models will take over soon. Just wait."
> But this is not an applied AI company.
There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources at Meta, and we didn't see anything.
It could be a management issue, though, and I sincerely wish we will see more competition, but from what I quoted above, it does not seem like it.
Understanding the world through videos (mentioned in the article) is just what video models have already done, and they are getting pretty good (see Seedance, Kling, Sora, etc.). So I'm not quite sure how what he proposed would work.
> There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources at Meta, and we didn't see anything.
That's true for 99% of scientists, but dismissing their opinion based on them not having done world-shattering/groundbreaking research is probably not the way to go.
> I sincerely wish we will see more competition
I really wish we don't, science isn't markets.
> Understanding the world through videos
The word "understanding" is doing a lot of heavy lifting here. I find myself prompting again and again for corrections on an image or a summary and "it" still does not "understand" and keeps doing the same thing over and over again.
Llama models pushed the envelope for a while, and having them "open-weight" allowed a lot of tinkering. I would say that most fine-tuned models evolved from work on top of Llama models.
Llama wasn’t Yann LeCun’s work and he was openly critical of LLMs, so it’s not very relevant in this context.
Source: himself https://x.com/ylecun/status/1993840625142436160 (“I never worked on any Llama.”) and a million previous reports and tweets from him.
I can’t reconcile this dichotomy: most of the landmark deep learning papers were developed with what, by today’s standards, were almost ridiculously small training budgets — from Transformers to dropout, and so on.
So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D
It's a matter of needing more time, which is a resource even SV VCs are scared to throw around. Look at the timeline of all these advancements and how long it took:
LeCun introduced backprop for deep learning back in 1989
Hinton published about contrastive divergence in next-token prediction in 2002
AlexNet was 2012
Word2vec was 2013
Seq2seq was 2014
AiAYN was 2017
UnicornAI was 2019
InstructGPT was 2022
This makes a lot of people think that things are just accelerating and they can be along for the ride. But it's the years and years of foundational research that allow this to be done. That toll has to be paid for the successors of LLMs to be able to reason properly and operate in the world the way humans do. That sowing won't happen as fast as the reaping did. LeCun wants to plant those seeds; the others, who only want to eat the fruit, don't get that they have to wait.
He was suffocated by the corporate aspect of Meta, I suspect.
https://archive.is/20260310070651/https://www.ft.com/content...
I wish him luck.
Recently all papers are about LLMs; it brings on fatigue.
As GPT is almost reaching its limit, a new architecture could bring out new discoveries.
This could have been 1000 seed rounds. We are creating technological deserts by going all-in on AI and star personalities.
Because for these investors the opportunity cost of this is higher than other startups.
I agree with you; there should be more diversity in investments in EU startups, but ¯\_(ツ)_/¯ not my money.
Seems like it's the second largest seed round anywhere after Thinking Machines Labs? https://news.crunchbase.com/venture/biggest-seed-round-ai-th...
That article is from June 2025 so may be out of date, and the definition of "seed round" is a bit fuzzy.
Thinking Machines looks half-dead already.
The giant seed round proves investors were willing to fund Mira Murati, not that the company had built anything durable.
Within months, it had already lost cofounder Andrew Tulloch to Meta, then cofounders Barret Zoph and Luke Metz plus researcher Sam Schoenholz to OpenAI; WIRED also reported that at least three other researchers left. At that point, citing it as evidence of real competitive momentum feels weak.
Was just a grift
As someone in the tech Twitter sphere, this is Yann and his ideas performing a suplex on LLM-based companies. It is completely unfathomable to start an AI research company… only sell off 20% and have 1 billion for screwing around for a few years.
I liken this to watching a Godzilla-esque movie. Just grab some popcorn and enjoy the ride.
A fair amount of negative comments here, but Yann might very well be the person who brings the Bell Labs culture back to life. It’s been badly missing, and not just in Europe.
https://archive.is/TEwfi
I just saw a post from Yann mentioning that AMI Labs is hiring too!
He couldn't achieve at least parity with LLMs during his days at Meta (most probably with billions in resources at his disposal), but he'll succeed now? What is the pitch?
The pitch isn't to try to squeeze money out of a product like Altman does. It's to lay the groundwork for the next evolution in AI. LLMs were built on decades of work and they've hit their limits. We'll need to invest a lot of time building foundations without getting any tangible yield for the next step to work. Get too greedy and you'll be stuck.
If, for even one second, they get into a position that threatens Big Tech AI (mostly if not entirely US-based) in any way, they will be raided by international finance, to be dismantled and poached hardcore by some massive US "investment funds" (which look more and more like "weaponized" international finance!!). Only China is really immune to international finance. Those funds have tens of thousands of billions of dollars; basically, in a world of money, there is near-zero resistance.
Once again, US companies and VCs are in this seed round. Just like Mistral with their seed round.
Europe again missing out, until AMI reaches a much higher valuation with an obvious use case in robotics.
Either AMI reaches a $100B+ valuation (likely) or it becomes a Thinking Machines Lab with investors questioning its valuation (very unlikely, since world models have a use case in vision and robotics).
> Europe again missing out
I can't read the article, but with American investors investing in European companies, isn't the US the one missing out here? Or does "Europe" "win" when European investors invest in US companies? How does that work in your head?
It is enough to attract worthy talent & produce interesting outcomes.
Not based on true valuation unless h-index has become a valuation metric lol
Academics don’t always make great entrepreneurs.
That being said, Yann LeCun's Twitter reposts are below average IQ.
Do you have a recent example?
Adds up: we are seeing a clear exodus of both capital and talent from the US, given the current US administration's shift toward cronyism, and the EU stands as the most compelling alternative, with a uniform market of 500 million people and the last major federation truly committed to the rule of law.
You lost me at “uniform”…
"Exodus of capital" as if OpenAI didn't just raise 115b
Here you can see why it is so hard to compete as a European startup with US startups: abysmal access to money. An investment of $1B in Europe is glorified as the largest seed ever, but in the USA it is just another Tuesday.
A billion-dollar seed is not an everyday event anywhere.
Not at all. A quick google turns up evidence of 4. There may be more but I think probably not many.
For a foundation AI lab with a world-famous AI researcher at the helm, though, it's not so impressive. It won't even touch the sides of the hardware costs they'd need to be anywhere near competitive.
Europeans have free healthcare and retirement. They consider putting their money toward long-term benefits, not just becoming CEO on Tuesday and declaring bankruptcy on Wednesday.
It is not free, we just pay taxes.
Retirement is the worst. You are basically forced to pay into an unsustainable system (at least in Germany). It already has to be subsidized by taxes.
Free healthcare and retirement?
It is a universal system but definitely not free. In Germany you pay on average 17.5% of your salary for health insurance and 18.6% for retirement. However, contribution caps exist: 70k for healthcare and 100k for retirement.
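Using the figures above (rates and caps as stated, not verified official values, and ignoring the employer/employee split), the math looks like this:

```python
# Rough contribution math with the quoted figures: 17.5% health, 18.6% pension,
# contribution caps at 70k and 100k of annual salary. Illustrative only.
def contributions(salary: float) -> float:
    health = 0.175 * min(salary, 70_000)
    pension = 0.186 * min(salary, 100_000)
    return health + pension

for s in (50_000, 120_000):
    print(s, round(contributions(s)))
# 50000  -> 8750 + 9300  = 18050
# 120000 -> 12250 + 18600 = 30850 (both caps hit)
```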
„free“
A startup getting 1B net worth is so rare that such companies are called unicorns.
As the other commenter pointed out, this is 1B seed.
Actually, they raised $1.03 billion at a $3.5 billion valuation.