Agreed. A hot take I have is that I think AI is over-hyped in its long-term capabilities, but under-hyped in its short-term ones. We're at the point, today or within the next twelve months, where all the frontier labs could stop investing any money in research and still see revenue growth from usage of what they've built, and humanity would still be significantly more productive every year, year over year, for quite a while because of it.
The real driver of productivity growth from AI systems over the next few years isn't going to be model advancements; it'll be the more traditional software engineering, electrical engineering, robotics, etc systems that get built around the models. Phrased another way: If you're an AI researcher thinking you're safe but the software engineers are going to lose their jobs, I'd bet every dollar on reality being the reverse of that.
Apparently Dwarkesh's podcast is a big hit in SV -- it was covered by the Economist just recently. I thought the "All In" podcast was the voice of tech, but their content has been going political with MAGA lately and their episodes are basically shouting matches with their guests.
And for folks who want to read rather than listen to a podcast, why not create an article (they are using Gemini) rather than just posting the whole transcript? Who is going to read a 60-minute-long transcript?
30 years away seems rather unlikely to me, if you define AGI as being able to do the stuff humans do. I mean, like Dwarkesh says:
>We’ve gone from Chat GPT two years ago to now we have models that can literally do reasoning, are better coders than me, and I studied software engineering in college.
Also we've recently reached the point where relatively reasonable hardware can do as much compute as the human brain so we just need some algorithms.
A lot of Kurzweil's predictions are nowhere close to coming true, though.
For example, he thought by 2019 we'd have millions of nanorobots in our blood, fighting disease and improving cognition. As near as I can tell we are not tangibly closer to that than we were when he wrote about it 25 years ago. By 2030, he expected humans to be immortal.
There’s increasing evidence that LLMs are more than that. Especially work by Anthropic has been showing how to trace the internal logic of an LLM as it answers a question. They can in fact reason over facts contained in the model, not just repeat already seen information.
A simple example is how LLMs do math. They are not calculators and have not memorized every sum in existence. Instead they deploy a whole set of mental math techniques that were discovered at training time. For example, Claude uses a special trick for adding 2 digit numbers ending in 6 and 9.
Many more examples in this recent research report, including evidence of future planning while writing rhyming poetry.
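As a toy illustration only (this is my sketch of the flavor of what the paper describes, not Anthropic's actual circuit): one memorized "ends in 5" pattern handles the ones digit while a separate path handles rough magnitude and carry.

    # Toy analogy: two cooperating "paths" for a two-digit sum, loosely echoing the
    # separate features the interpretability work reports for endings and magnitude.
    ONES_PATTERN = {(6, 9): 5, (9, 6): 5}   # a memorized ending, like a learned feature

    def toy_add(a: int, b: int) -> int:
        ones = ONES_PATTERN.get((a % 10, b % 10), (a % 10 + b % 10) % 10)
        carry = 1 if a % 10 + b % 10 >= 10 else 0
        tens = a // 10 + b // 10 + carry     # rough-magnitude / carry path
        return tens * 10 + ones

    print(toy_add(36, 59))                   # 95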
"Our methods study the model indirectly using a more interpretable “replacement model,” which incompletely and imperfectly captures the original."
"(...) we build a replacement model that approximately reproduces the activations of the original model using more interpretable components. Our replacement model is based on a cross-layer transcoder (CLT) architecture (...)"
"Remarkably, we can substitute our learned CLT features for the model's MLPs while matching the underlying model's outputs in ~50% of cases."
"Our cross-layer transcoder is trained to mimic the activations of the underlying model at each layer. However, even when it accurately reconstructs the model’s activations, there is no guarantee that it does so via the same mechanisms."
These two papers were designed to be used as the sort of argument that you're making. You point to a blog post that glosses over it. You have to click through the "Read the paper" link to find a ~100 page paper, referencing another ~100 page paper, before you find any of these caveats. The blog post you linked doesn't even feature the words "replacement (model)" or any discussion of the reliability of this approach.
Yet it is happy to make bold claims such as "we look inside Claude 3.5 Haiku, performing deep studies of simple tasks representative of ten crucial model behaviors" which is simply not true.
Sure, they added to the blog post: "the mechanisms we do see may have some artifacts based on our tools which don't reflect what is going on in the underlying model", but that seems like a lot of indirection when the fact is that all the observations discussed in the papers and the blog post are of nothing but such artifacts.
I don't think that is the core of this paper. If anything, the paper shows that LLMs have no internal reasoning for math at all. The example they demonstrate shows the same features being triggered by seemingly unrelated numbers. They kind of just "vibe" their way to a solution.
> sometimes this "chain of thought" ends up being misleading; Claude sometimes makes up plausible-sounding steps to get where it wants to go. From a reliability perspective, the problem is that Claude’s "faked" reasoning can be very convincing.
If you ask the LLM to explain how it got the answer the response it gives you won't necessarily be the steps it used to figure out the answer.
Every time I try to work with them I lose more time than I gain. Net loss every time. Immensely frustrating. If I focus them on a small subtask I can gain some time (rough draft of a test). Anything more advanced and it's a monumental waste of time.
They are not even good librarians. They fail miserably at cross referencing and contextualizing without constant leading.
I've only really been experimenting with them for a few days, but I'm kind of torn on it. On the one hand, I can see a lot of things it could be useful for, like indexing all the cluttered files I've saved over the years and looking things up for me faster than I could find|grep. Heck, yesterday I asked one a relationship question, and it gave me pretty good advice. Nothing I couldn't have gotten out of a thousand books and magazines, but it was a lot faster and more focused than doing that.
On the other hand, the prompt/answer interface really limits what you can do with it. I can't just say, like I could with a human assistant, "Here's my calendar. Send me a summary of my appointments each morning, and when I tell you about a new one, record it in here." I can script something like that, and even have the LLM help me write the scripts, but since I can already write scripts, that's only a speed-up at best, not anything revolutionary.
I asked Grok what benefit there would be in having a script fetch the weather forecast data, pass it to Grok in a prompt, and then send the output to my phone. The answer was basically, "So I can say it nicer and remind you to take an umbrella if it sounds rainy." Again, that's kind of neat, but not a big deal.
Maybe I just need to experiment more to see a big advance I can make with it, but right now it's still at the "cool toy" stage.
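For what it's worth, the plumbing for that weather idea really is just a few lines. A minimal sketch (Open-Meteo and ntfy.sh are real services I'm assuming here; ask_llm() is a stand-in for whatever chat API you actually use, since I'm not going to guess Grok's exact interface):

    # Sketch: fetch a forecast, have an LLM phrase it nicely, push it to a phone.
    import requests

    def fetch_forecast(lat: float, lon: float) -> dict:
        url = "https://api.open-meteo.com/v1/forecast"
        params = {
            "latitude": lat,
            "longitude": lon,
            "daily": "temperature_2m_max,precipitation_probability_max",
            "timezone": "auto",
        }
        return requests.get(url, params=params, timeout=10).json()

    def ask_llm(prompt: str) -> str:
        # Placeholder: call Grok or any chat-completions-style endpoint here.
        raise NotImplementedError

    def notify_phone(message: str, topic: str = "my-weather-topic") -> None:
        # ntfy.sh pushes the message to any device subscribed to the topic.
        requests.post(f"https://ntfy.sh/{topic}", data=message.encode(), timeout=10)

    if __name__ == "__main__":
        forecast = fetch_forecast(59.33, 18.07)   # example coordinates
        summary = ask_llm(
            "Summarize this forecast in two friendly sentences, and remind me "
            f"about an umbrella if rain is likely: {forecast}"
        )
        notify_phone(summary)

Put it on a morning cron job and that's the whole "say it nicer and remind you to take an umbrella" assistant.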
You cannot have AGI without a physical manifestation that can generate its own training data based on inputs from the outside world (e.g., via sensors) and constantly refine its model.
Pure language or pure image-models are just one aspect of intelligence - just very refined pattern recognition.
You will also probably need some aspect of self-awareness in order for the system to set auxiliary goals and directives related to self-maintenance.
But you don't need AGI in order to have something useful (which I think a lot of readers are confused about). No one is making the argument that you need AGI to bring tons of value.
Explosive growth? Interesting. But at some point, human civilization hits a saturation point. There’s only so much people can eat, wear, drive, stream, or hoard. Extending that logic, there’s a natural ceiling to demand - one that even AGI can’t code its way out of.
Sure, you might double the world economy for a decade, but then what? We’ll run out of people to sell things to. And that’s when things get weird.
To sustain growth, we’d have to start manufacturing demand itself - perhaps by turning autonomous robots into wage-earning members of society. They’d buy goods, subscribe to services, maybe even pay taxes. In effect, they become synthetic consumers fueling a post-human economy.
I call this post-human consumerism. It’s when the synthesis of demand would hit the next gear - if we keep moving in this direction.
One thing in the podcast I found really interesting from a personal pov was:
> I remember talking to a very senior person who’s now at Anthropic, in 2017. And then he told various people that they shouldn’t do a PhD because by the time they completed it everyone will be automated.
Don’t tell young people things like this. Predicting the future is hard, and it is the height of hubris to think otherwise.
I remember, as a teen, thinking I was supposed to be a pilot all my life. I was ready to enroll in a school with a two-year program.
However, I was also into computers. One person I looked up to in that world said to me, “don’t be a pilot, it will all be automated soon and you will just be bus drivers, at best.” This entirely took the wind out of my piloting sails.
This was in the early 90’s, and 30 years later, it is still wrong.
Would we even recognise it if it arrived? We'd recognise human-level intelligence, probably, but that's specialised. What would general intelligence even look like?
If/when we will have AGI, we will likely have something fundamentally superhuman very soon after, and that will be very recognizable.
This is the idea of "hard takeoff" -- because of the way we can scale computation, there will only ever be a very short time when the AI is roughly human-level. Even if there are no fundamental breakthroughs, at the very least silicon can be run much faster than meat, and instead of compensating for narrower width with execution speed like current AI systems do (no AI datacenter is even close to the width of a human brain), you could just spend the money to make your AI system 2x wider and run it at 2x the speed. What would a good engineer (or a good team of engineers) be able to accomplish if they could have 10 times the workdays in a week that everyone else has?
This is often conflated with the idea that AGI is very imminent. I don't think we are particularly close to that yet. But I do think that if we ever get there, things will get very weird very quickly.
Would AGI be recognisable to us? When a human pushes over an anthill, what do the ants think happened? Do they even know the anthill is gone? Did they have a concept of the anthill as a huge edifice, or did they only know earth to squeeze through and some biological instinct?
If general intelligence arrived and did whatever general intelligence would do, would we even see it? Or would there just be things that happened that we just can't comprehend?
Mustafa Suleyman says AGI is when a (single) machine can perform every cognitive task better than the best humans. That is significantly different from OpenAIs definition (...when we make enough $$$$$, it's AGI).
Suleyman's book "The Coming Wave" talks about Artificial Capable Intelligence (ACI) - between today's LLMs (== "AI" now) and AGI. AI systems capable of handling a lot of complex tasks across various domains, yet not being fully general. Suleyman argues that ACI is here (2025) and will have huge implications for society. These systems could manage businesses, generate digital content, and even operate core government services -- as is happening on a small scale today.
He also opines that these ACIs give us plenty of frontier to be mined for amazing solutions. I agree, what we have already has not been tapped-out.
His definition, to me, is early ASI. If a program is better than the best humans, then we ask it how to improve itself. That's what ASI is.
The clearest thinker alive today on how to get to AGI is, I think, Yann LeCun. He said, paraphrasing: If you want to build an AGI, do NOT work on LLMs!
Good advice; and go (re-?) read Minsky's "Society of Mind".
I'd accept that as a human kind of intelligence, but I'm really hoping that AGI would be a bit more general. That clever human thinking would be a subset of what it could do.
You could ask Gemini 2.5 to do that today and it's well within its capabilities, just as long as you also let it write and run unit tests, as a human developer would.
AGI isn't ASI; it's not supposed to be smarter than humans. The people who say AGI is far away are unscientific woo-mongers, because they never give a concrete, empirically measurable definition of AGI. The closest we have is Humanity's Last Exam, which LLMs are already well on the path to acing.
Consider this: being born (or trained) in 1900, if that were possible, and given a year to adapt to the world of 2025, how well would an LLM do on any test? Compare that to how a 15-year-old human in the same situation would do.
I'd expect it to be generalised, where we (and everything else we've ever met) are specialised. Our intelligence is shaped by our biology and our environment; the limitations on our thinking are themselves concepts the best of us can barely glimpse. Some kind of intelligence that inherently transcends its substrate.
What that would look like, how it would think, the kind of mental considerations it would have, I do not know. I do suspect that declaring something that thinks like us would have "general intelligence" to be a symptom of our limited thinking.
Is it just me, or is the signal-to-noise ratio of all these cheerleader tech podcasts needle-in-a-haystack bad? In general, I really miss the podcast scene from 10 years ago: less polished but more human, and with reasonable content. Not this speculative blabber that seems designed to generate clickbait clips. I don't know what happened a few years ago, but even solid podcasts are practically garbage now.
I used to listen to podcasts daily for at least an hour. Now I'm stuck with uploading blogs and pdfs to Eleven Reader. I tried the Google thing to make a podcast but it's very repetitive and dumb.
Natural intelligence is too expensive. Takes too long for it to grow. If things go wrong then we have to jail it. With computers we just change the software.
Not artificial, but yes, it's unclear what advantage an artificial person has over a natural one, or how it's supposed to gain special insights into fusion reactor design, etc., even if it can think very fast.
I do not like those who try to play God. The future of humanity will not be determined by some tech giant in their ivory tower, no matter how high it may be. This is a battle that goes deeper than ones and zeros. It's a battle for the soul of our society. It's a battle we must win, or face the consequences of a future we cannot even imagine... and that, I fear, is truly terrifying.
> The future of humanity will not be determined by some tech giant in their ivory tower
Really? Because it kinda seems like it already has been. Jony Ive designed the most iconic smartphone in the world from a position beyond reproach even when he messed up (e.g., Bendgate). Google decides what your future is algorithmically, basically eschewing determinism to sell an ad or recommend a viral video. Instagram, Facebook and TikTok all have disproportionate influence over how ordinary people live their lives.
From where I'm standing, the future of humanity has already been cast by tech giants. The notion of AI taking control is almost a relief considering how illogical and obstinate human leadership can be.
Might as well be 10 - 1000 years. Reality is no one knows how long it'll take to get to AGI, because:
1) No one knows what exactly makes humans "intelligent" and therefore 2) No one knows what it would take to achieve AGI
Go back through history and AI / AGI has been a couple of decades away for several decades now.
I'm reminded of the old adage: You don't have to be faster than the bear, just faster than the hiker next to you.
To me, the Ashley Madison hack in 2015 was 'good enough' for AGI.
No really.
You somehow managed to get real people to chat with bots and pay to do so. Yes, caveats about cheaters apply here, and yes, those bots are incredibly primitive compared to today.
But, really, what else do you want out of the bots? Flying cars, cancer cures, frozen irradiated Mars bunkers? We were mostly getting there already. It'll speed things up a bit, sure, but mostly just because we can't be arsed to actually fund research anymore. The bots are just making things cheaper, maybe.
No, be real. We wanted cold hard cash out of them. And even those crummy catfish bots back in 2015 were doing the job well enough.
We can debate 'intelligence' until the sun dies out and will still never be satisfied.
But the reality is that we want money, and if you take that low, terrible, and venal standard as the passing bar, then we've been here for a decade.
(oh man, just read that back, I think I need to take a day off here, youch!)
> You somehow managed to get real people to chat with bots and pay to do so.
He's_Outta_Line_But_He's_Right.gif
Seriously, AGI to the HN crowd is not the same as AGI to the average human. To my parents, these bots must look like fucking magic. They can converse with them, "learn" new things, talk to a computer like they'd talk to a person and get a response back. Then again, these are also people who rely on me for basic technology troubleshooting stuff, so I know that most of this stuff is magic to their eyes.
That's the problem, as you point out. We're debating a nebulous concept ("intelligence") that's been co-opted by marketers to pump and dump the latest fad tech that's yet to really display significant ROI to anyone except the hypesters and boosters, and isn't rooted in medical, psychological, or societal understanding of the term anymore. A plurality of people are ascribing "intelligence" to spicy autocorrect, worshiping stochastic parrots vomiting markov chains but now with larger context windows and GPUs to crunch larger matrices, powered by fossil fuels and cooled by dwindling freshwater supplies, and trained on the sum total output of humanity but without compensation to anyone who actually made the shit in the first place.
So yeah. You're dead-on. It's just about bilking folks out of more money they already don't have.
And Ashley Madison could already do that for pennies on the dollar compared to LLMs. They just couldn't "write code" well enough to "replace" software devs.
I think AGI has to do more than pass a Turing test given by someone who wants to be fooled.
By your measure, Eliza was AGI, back in the 1960s.
For me it was twitter bots during the 2016 election, but same principle.
I think that's another issue with "AGI is 30 years away": the definition of AGI is a bit subjective. Not sure how we can measure how long it'll take to get somewhere when we don't know exactly where that somewhere even is.
> But the reality is that we want money
Only in a symbolic way. Money is just debt. It doesn't mean anything if you can't call the loan and get back what you are owed. On the surface, that means stuff like food, shelter, cars, vacations, etc. But beyond the surface, what we really want is other people who will do anything we please. Power, as we often call it. AGI is, to some, seen as the way to give them "power".
But, you are right, the human fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.
There are a lot of other things that follow this pattern. 10-30 year predictions are a way to sound confident about something that probably has very low confidence. Not a lot of people will care let alone remember to come back and check.
On the other hand there is a clear mandate for people introducing some different way of doing something to overstate the progress and potentially importance. It creates FOMO so it is simply good marketing which interests potential customers, fans, employees, investors, pundits, and even critics (which is more buzz). And growth companies are immense debt vehicles so creating a sense of FOMO for an increasing pyramid of investors is also valuable for each successive earlier layer. Wish in one hand..
Generalized, as a rule I believe is usually true: any prediction made for an event happening greater than ten years out is code for that person saying "definitely not in the next few years; beyond that I have no idea", whether they realize it or not.
That we don't have a single unified explanation doesn't mean that we don't have very good hints, or that we don't have very good understandings of specific components.
Aside from that the measure really, to me, has to be power efficiency. If you're boiling oceans to make all this work then you've not achieved anything worth having.
From my calculations the human brain runs on about 400 calories a day. That's an absurdly small amount of energy. This hints at the direction these technologies must move in to be truly competitive with humans.
We'll be experiencing extreme social disruption well before we have to worry about the cost-efficiency of strong AI. We don't even need full "AGI" to experience socially momentous change. We might even be on the verge of self driving cars spreading to more cities.
We don't need very powerful AI to do very powerful things.
Note that those are kilocalories, and that's ignoring the calories needed for the circulatory and immune systems, which are somewhat necessary for proper function. Using 2000 kcal per day over 10 hours of thinking gives a consumption of ~200 W.
Energy efficiency is not really a good target since you can brute force it by distilling classical ANNs to spiking neural networks.
We are very good at generating energy. Even if AI is an order of magnitude less energy efficient, an AI person-equivalent would use ~4 kilowatt-hours per day. At current rates that's like $1. Hardly the limiting factor here, I think.
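Back-of-the-envelope, for anyone who wants to check the ~200 W figure above and the ~$1/day claim (the electricity price is my assumption):

    KCAL_TO_J = 4184

    brain_w = 400 * KCAL_TO_J / 86_400            # ~19 W for 400 kcal spread over a day
    body_w  = 2000 * KCAL_TO_J / (10 * 3600)      # ~230 W if 2000 kcal funds 10 h of thinking
    cost    = 4 * 0.25                            # 4 kWh/day at an assumed $0.25/kWh

    print(round(brain_w), round(body_w), f"${cost:.2f}")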
If you look back at predictions of the future in the past in general, then so many of them have just been wrong. Especially during a "hype phase". Perhaps the best example is what people were predicting in 1969 after we landed on the moon: this is just the first step in the colonisation of the moon, Mars, and beyond. etc. etc. We just have to have our tech a bit better.
It's all very easy to see how that can happen in principle. But turns out actually doing it is a lot harder, and we hit some real hard physical limits. So here we are, still stuck on good ol' earth. Maybe that will change at some point once someone invents an Epstein drive or Warp drive or whatever, but you can't really predict when inventions happen, if ever, so ... who knows.
Similarly, it's not my impression that AGI is simply a matter of "the current tech, but a bit better". But who knows what will happen or what new thing someone may or may not invent.
Exactly, what does the general in Artificial General Intelligence mean to these people?
I would even go one order of magnitude further in both directions: 1-10,000 years.
A realist might say, "As long as money keeps flowing to Silicon Valley then who cares."
Is AGI even important? I believe the next 10 to 15 years will be Assisted Intelligence. There are things current LLMs are so poor at that I don't believe a 100x increase in perf/watt is going to make much difference. But it is going to be good enough that there won't be an AI winter, since current AI has already reached escape velocity and actually increases productivity in many areas.
The most intriguing part is whether the programming of humanoid factory workers will be made 1,000 to 10,000x more cost-effective with LLMs, effectively ending all human production. I know this is a sensitive topic, but I don't think we are far off. And I often wonder if this is what the current administration has in sight. (Likely not.)
I think having a real-life JARVIS would be super cool and useful, especially if it's plugged into various things and can take action. Yes, also potentially dangerous, but I want to feel like Iron Man.
Except only Iron Man had JARVIS.
I would be thrilled with AI assistive technologies, so long as they improve my capabilities and I can trust that they deliver the right answers. I don't want to second-guess every time I make a query. At minimum, it should tell me how confident it feels in the answer it provides.
> At minimum, it should tell me how confident it feels in the answer it provides.
How’s that work out for Dave Bowman? ;-)
Depends on what you mean by “important”. It’s not like it will be a huge loss if we never invent AGI. I suspect we can reach a technology singularity even with limited AI derived from today’s LLMs
But AGI is important in the sense that it will have a huge impact on the path humanity takes, hopefully for the better.
> But AGI is important in the sense that it will have a huge impact on the path humanity takes
The only difference between AI and AGI is that AI is limited in how many tasks it can carry out (special intelligence), while AGI can handle a much broader range of tasks (general intelligence). If instead of one AGI that can do everything, you have many AIs that, together, can do everything, what's the practical difference?
AGI is important only in that we are of the belief that it will be easier to implement than many AIs, which appeals to the lazy human.
AI winter is relative, and it's more about outlook and point of view than actual state of the field.
AGI is important for the future of humanity. Maybe they will have legal personhood some day. Maybe they will be our heirs.
It would suck if AGI were developed in the current economic landscape. They would just be slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGIs would be treated just like we treat animals, or even worse.
So AGI isn't about tools, it's not about assistants, they would be beings with their own existence.
But this is not even our discussion to have, that's probably a subject for the next generations. I suppose (or I hope) we won't see AGI in our lifetime.
I think you’re saying that you want a faster horse
I am thinking of designing machines to be used in a flexible manufacturing system and none of them will be humanoid robots. Humanoid robots suck for manufacturing. They're walking on a flat floor so what the heck do they need legs for? To fall over?
The entire point of the original assembly line was to keep humans standing in the same spot instead of wasting time walking.
> Is AGI even important?
It's an important question for VCs not for Technologists ... :-)
My pet peeve: talking about AGI without defining it. There’s no consistent, universally accepted definition. Without that, the discussion may be intellectually entertaining—but ultimately moot.
And we run into the motte-and-bailey fallacy: at one moment, AGI refers to something known to be mathematically impossible (e.g., due to the No Free Lunch theorem); the next, it’s something we already have with GPT-4 (which, while clearly not superintelligent, is general enough to approach novel problems beyond simple image classification).
There are two reasonable approaches in such cases. One is to clearly define what we mean by the term. The second (IMHO, much more fruitful) is to taboo your words (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your...)—that is, avoid vague terms like AGI (or even AI!) and instead use something more concrete. For example: “When will it outperform 90% of software engineers at writing code?” or “When will all AI development be in the hands of AI?”.
I like Chollet's definition: something that can quickly learn any skill without any innate prior knowledge or training.
>There’s no consistent, universally accepted definition.
That's because of the "I" part. There is no actual complete description of intelligence accepted across the different practices of the scientific community.
"Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions"
> There’s no consistent, universally accepted definition
What word or term does?
I've been saying this for a decade already but I guess it is worth saying here. I'm not afraid AI or a hammer is going to become intelligent (or jump up and hit me in the head either).
It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to creating dedicated circuits for our computations but is nothing by comparison to our minds.
Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
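A minimal illustration of that determinism:

    import random

    random.seed(42)
    first = [random.randint(0, 99) for _ in range(5)]
    random.seed(42)
    second = [random.randint(0, 99) for _ in range(5)]
    assert first == second   # the same seed replays the exact same "random" stream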
Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.
In Aristotle's ethics he talks a lot about ergon (purpose) -- hammers are different than people, computers are different than people, they have an obvious purpose (because they are tools made with an end in mind). Minds strive -- we have desires, wants and needs -- even if it is simply to survive or better yet thrive (eudaimonia).
An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.
> And from my brief experience on this planet I don't believe that premise.
A lot of things that humans believed were true due to their brief experience on this planet ended up being false: earth is the center of the universe, heavier objects fall faster than lighter ones, time ticked the same everywhere, species are fixed and unchanging.
So what your brief experience on this planet makes you believe has no bearing on what is correct. It might very well be that our mind can be reduced to a probabilistic and deterministic system. It might also be that our mind is a non-deterministic system that can be modeled in a computer.
> why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism
Then you've missed the point of software.
Software isn't computer science, it's not always about code. It's about solving problems in a way we can control and manufacture.
If we needed random numbers, we could easily use hardware that exploits some physical property, or we could pull in an observation from an API, like the weather. We don't do these things because pseudo-random is good enough, and the other solutions have drawbacks (like requiring an internet connection for API calls). But that doesn't mean software can't solve these problems.
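In practice the "hardware that exploits some physical property" option is already one import away on most systems; the OS entropy pool is fed by hardware noise (details vary by platform), so here's a sketch of that route rather than of any particular weather API:

    import os
    import secrets

    print(os.urandom(16).hex())     # kernel CSPRNG, seeded from hardware noise sources
    print(secrets.token_hex(16))    # stdlib wrapper intended for security-sensitive use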
> It is science fiction to think that a system like a computer can behave at all like a brain
It is science fiction to think that a plane could act at all like a bird. Although... it doesn't need to in order to fly
Intelligence doesn't mean we need to recreate the brain in a computer system. Sentience, maybe. General intelligence no
Is there any specific mental task that an average human is capable of that you believe computers will not be able to do?
Also does this also mean that you believe that brain emulations (uploads) are not possible, even given an arbitrary amount of compute power?
The universe we know is fundamentally probabilistic, so by extension everything, including stars, planets and computers, is inherently non-deterministic. But setting aside quantum effects and absolute determinism, we do not have a reason to believe that the mind should be anything but deterministic, scientifically at least.
We understand the building blocks of the brain pretty well. We know the structure and composition of neurons, we know how they are connected, what chemicals flow through them and how all these chemicals interact, and how that interaction translates to signal propagation. In fact, the neural networks we use in computing are loosely modelled on biological neurons. Both models are essentially comprised of interconnected units where each unit has weights to convert its incoming signals to outgoing signals. The predominant difference is in how these units adjust their weights, where computational models use back propagation and gradient descent, biological models use timing information from voltage changes.
But just because we understand the science of something perfectly well doesn't mean we can precisely predict how it will behave. Biological networks are very, very complex systems comprising billions of neurons with trillions of connections, acting on input that can vary in an immeasurable number of ways. It's like predicting earthquakes. Even though we understand the science behind plate tectonics, to precisely predict an earthquake we would need to map the properties of every inch of the continental plates, which is an impossible task. But that doesn't mean we can't use the same scientific building blocks to build simulations of earthquakes that behave like any real earthquake would. If it looks like a duck and quacks like a duck, then what is a duck?
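To make the "unit with weights" picture concrete, here's a toy artificial unit (a sketch only; real biological neurons integrate spike timing rather than applying a sigmoid to a weighted sum):

    import math

    def unit(inputs, weights, bias):
        # weighted sum of incoming signals, squashed into a 0-1 "firing rate"
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    print(unit([0.2, 0.9, 0.1], [1.5, -0.8, 2.0], bias=0.1))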
I guarantee computers are better at generating random numbers than humans lol
This is why I think philosophy has become another form of semi-religious kookery. You haven't provided any actual proof or logical reason for why a computer couldn't be intelligent. If randomness is required then sample randomness from the real world.
It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.
If the physics underlying the brain's behavior is deterministic, it can be simulated by software, and so can the brain.
(and if we assume that non-determinism is randomness, non-deterministic brain could be simulated by software plus an entropy source)
What you're mentioning is like the difference between digital vs analog music.
For generic stuff you probably can't tell the difference, but once you move to the edges you start to hear the steps in digital vs the smooth transition of analog.
In the same way, AI runs on bits and bytes, and there's only so much detail you can fit into that.
You can approximate reality, but it'll never quite be reality.
I'd be much more concerned with growing organic brains in a lab. I wouldn't be surprised to learn that people are covertly working on that.
> Ask yourself, why is it so hard to get a cryptographically secure random number?
I mean, humans aren't exactly good at generating random numbers either.
And of course, every Intel and AMD CPU these days has a hardware random number generator in it.
Computers can't have unique experiences. I think it's going to replace search, but becoming sentient? Not in my lifetime, granted I'm getting up there.
The thing is, AGI is not needed to enable incredible business and societal value, and there is good reason to believe that actual AGI would damage our society and our economy, and, if many experts in the field are to be believed, threaten humanity's survival as well.
So I feel happy that models keep improving, and not worried at all that they're reaching an asymptote.
Really the only people for whom this is bad news is OpenAI and their investors. If there is no AGI race to win then OpenAI is just a wildly overvalued vendor of a hot commodity in a crowded market, not the best current shot at building a money printing machine.
I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought about the best way to build this.
You just asked it to design or implement?
If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?
While impressive, I'm not convinced that improved performance on tasks of this nature is indicative of progress toward AGI. Building a scheduler is a well-studied problem space. Something like the ARC benchmark is much more indicative of progress toward true AGI, but probably still insufficient.
“It does something well” ≠ “it will become AGI”.
Your anecdotal example isn't any more convincing than “This machine cracked Enigma's messages in less time than an army of cryptanalysts needed in a month, surely we're going to reach AGI by the end of the decade” would have been.
I find now that I quickly bucket people into "have not/have barely used the latest AI models" or "trolls" when they express a belief that current LLMs aren't intelligent.
Designing a distributed scheduler is a solved problem, of course an LLM was able to spit out a solution.
I’ve had similar things over the last couple days with o3. It was one-shotting whole features into my Rust codebase. Very impressive.
I remember before ChatGPT, smart people would come on podcasts and say we were 100 or 300 years away from AGI.
Then we saw GPT shock them. The reality is these people have no idea, it’s just catchy to talk this way.
With the amount of money going into the problem and the steady improvements we see over time, it's much more likely that we see AGI sooner rather than later.
Wow, 1M+ a day is only about 12 per second on average.
I'm not sure what your point is in the context of the AGI topic.
Can someone throw some light on this Dwarkesh character? He landed a Zucc podcast pretty early on... how connected is he? Is he an industry plant?
He's awesome.
I listened to Lex Fridman for a long time, and there were a lot of critiques of him (Lex) as an interviewer, but since the guests were amazing, I never really cared.
But after listening to Dwarkesh, my eyes are opened (or maybe my soul). It doesn't matter that I've heard of hardly any of his guests, because he knows exactly the right questions to ask. He seems to have genuine curiosity about what the guest is saying, and will push back if something doesn't make sense to him. Very much recommend.
He is one of the most prepared podcasters I’ve ever come across. He puts all other mainstream podcasts to deep shame.
He spends weeks reading everything by his guests prior to the interview, asks excellent questions, pushes back, etc.
He certainly has blind spots and biases just like anyone else. For example, he is very AI scale-pilled. However, he will have on people, like today's guests, who contradict his biases. That is apparently something a host like Lex could never do.
Dwarkesh is up there with Sean Carroll's podcast as the most interesting and most intellectually honest, in my view.
https://archive.ph/IWjYP
He was covered in the Economist recently -- I hadn't heard of him till now, so I imagine it's not just AI-slop content.
Most people talking about AI and economic growth have a vested interest in talking up how it will increase economic growth, but they don't mention that under the world's current economic system, most if not all of that growth will go to >0.0001% of the population.
And in 30 years it will be another 30 years away.
LLMs are so incredibly useful and powerful but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All that these AI companies see are the $$$. When the biggest "AI Research Labs" like OpenAI shifted to product-izing their LLM offerings I think the writing was on the wall that they don't actually care about finding AGI.
Got it. So this is now a competition between...
1. Fusion power plants 2. AGI 3. Quantum computers 4. Commercially viable cultured meat
May the best "imminent" fantasy tech win!
People over-estimate the short term and under-estimate the long term.
People will keep improving LLMs, and by the time they are AGI (less than 30 years), you will say, "Well, these are no longer LLMs."
Doesn't even matter. The capabilities of the AI that's out NOW will take a decade or more to digest.
I feel like it's already been pretty well digested and excreted for the most part, now we're into the re-ingestion phase until the bubble bursts.
Agreed. A hot take I have is that AI is over-hyped in its long-term capabilities but under-hyped in its short-term ones. We're at the point, today or within the next twelve months, where all the frontier labs could stop investing any money in research and they'd still see revenue growth from usage of what they've already built, and humanity would still be significantly more productive year over year for quite a while because of it.
The real driver of productivity growth from AI systems over the next few years isn't going to be model advancements; it'll be the more traditional software engineering, electrical engineering, robotics, etc systems that get built around the models. Phrased another way: If you're an AI researcher thinking you're safe but the software engineers are going to lose their jobs, I'd bet every dollar on reality being the reverse of that.
Apparently Dwarkesh's podcast is a big hit in SV -- it was covered by the Economist just recently. I thought the "All-In" podcast was the voice of tech, but their content has been going political with MAGA lately, and their episodes are basically shouting matches with their guests.
And for folks who want to read rather than listen to a podcast, why not create an article (they are using Gemini) rather than just posting the whole transcript? Who is going to read a 60-minute-long transcript?
30 years away seems rather unlikely to me, if you define AGI as being able to do the stuff humans do. I mean, like Dwarkesh says:
>We’ve gone from Chat GPT two years ago to now we have models that can literally do reasoning, are better coders than me, and I studied software engineering in college.
Also, we've recently reached the point where relatively reasonable hardware can do roughly as much compute as the human brain, so now we just need the algorithms.
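How true that is depends entirely on which estimate of "brain compute" you pick; a hedged back-of-envelope sketch in Python, using commonly quoted but very uncertain numbers:

    # Rough numbers only; estimates of brain compute span several orders of magnitude.
    neurons = 8.6e10             # ~86 billion neurons
    synapses_per_neuron = 1e3    # often quoted as 1e3-1e4
    avg_firing_rate_hz = 10      # average firing rate; also highly uncertain

    brain_ops_per_s = neurons * synapses_per_neuron * avg_firing_rate_hz
    gpu_flops = 1e15             # order of magnitude of a modern datacenter GPU at FP16

    print(f"brain ~{brain_ops_per_s:.1e} synaptic ops/s vs GPU ~{gpu_flops:.0e} FLOPS")
    # ~8.6e14 vs ~1e15: the same ballpark, but only under these generous assumptions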
I'll take the "under" on 30 years. Demis Hassabis (who has more credibility than whoever these 3 people are combined) says 5-10 years: https://time.com/7277608/demis-hassabis-interview-time100-20...
That's in line with Ray Kurzweil sticking to his long-held predictions: 2029 for AGI and 2045 for the singularity.
A lot of Kurzweil's predictions are nowhere close to coming correct though.
For example, he thought by 2019 we'd have millions of nanorobots in our blood, fighting disease and improving cognition. As near as I can tell we are not tangibly closer to that than we were when he wrote about it 25 years ago. By 2030, he expected humans to be immortal.
I’m sticking with Kurzweil’s predictions as well; his basic premise of extrapolating from compute scaling has been surprisingly robust.
~2030 is also roughly the Metaculus community consensus: https://www.metaculus.com/questions/5121/date-of-artificial-...
We will never have the required compute by then.
You can’t put a date on AGI until the required technology is invented and that hasn’t happened yet.
This "AGI" definition is extremely loose depending on who you talk to. Ask "what does AGI mean to you" and sometimes the answer is:
1. Millions of layoffs across industries due to AI with some form of questionable UBI (not sure if this works)
2. 100BN in profits. (Microsoft / OpenAI definition)
3. Abundance in slopware. (VC's definition)
4. Raise more money to reach AGI / ASI.
5. Any job that a human can do which is economically significant.
6. Safe AI (Researchers definition).
7. All the above that AI could possibly do better.
I am sure there must be an industry-aligned, concrete definition that everyone can agree on, rather than these goalpost-moving definitions.
Related: https://en.wikipedia.org/wiki/AI_effect
1. LLM interactions can feel real. Projection and psychological mirroring are very real.
2. I believe that AI researchers will require some level of embodiment to demonstrate:
a. ability to understand the physical world.
b. make changes to the physical world.
c. predict the outcome to changes in the physical world.
d. learn from the success or failure of those predictions and update their internal model of the external world.
---
I cannot quickly find proposed tests in this discussion.
I "love" how the interviewer keeps conflating intelligence with "Hey OpenAI will make $100b"
Fusion power will arrive first. And, it will be needed to power the Cambrian explosion of datacenters just for weak AI.
I could be wrong, but AGI may be a cold-fusion or flying-car boondoggle: chasing a dream that no one needs, that costs too much, or that is best left unrealized.
Huh, so it should be ready around the same time as practical fusion reactors then. I'll warm up the car.
Thirty years. Just enough time to call it quits and head to Costa Rica.
LLMs are basically a library that can talk.
That’s not artificial intelligence.
There’s increasing evidence that LLMs are more than that. Work by Anthropic, in particular, has shown how to trace the internal logic of an LLM as it answers a question. They can in fact reason over facts contained in the model, not just repeat already-seen information.
A simple example is how LLMs do math. They are not calculators and have not memorized every sum in existence. Instead they deploy a whole set of mental-math techniques that were discovered at training time. For example, Claude uses a special trick for adding two-digit numbers ending in 6 and 9.
Many more examples are in this recent research report, including evidence of future planning while writing rhyming poetry.
https://www.anthropic.com/research/tracing-thoughts-language...
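As a rough, purely illustrative sketch of that kind of two-path decomposition (a fuzzy magnitude estimate combined with an exact last-digit rule), loosely inspired by the write-up rather than a claim about the model's actual circuits:

    import random

    def add_by_two_paths(a, b):
        # "magnitude" path: roughly right, off by a few (illustrative noise)
        approx = a + b + random.randint(-3, 3)
        # "last digit" path: anything ending in 6 plus anything ending in 9 ends in 5
        last = (a % 10 + b % 10) % 10
        # combine: pick the value near the approximation whose last digit matches
        return min((approx + d for d in range(-9, 10) if (approx + d) % 10 == last),
                   key=lambda c: abs(c - approx))

    print(add_by_two_paths(36, 59))   # 95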
Oy vey not this paper again.
"Our methods study the model indirectly using a more interpretable “replacement model,” which incompletely and imperfectly captures the original."
"(...) we build a replacement model that approximately reproduces the activations of the original model using more interpretable components. Our replacement model is based on a cross-layer transcoder (CLT) architecture (...)"
https://transformer-circuits.pub/2025/attribution-graphs/bio...
"Remarkably, we can substitute our learned CLT features for the model's MLPs while matching the underlying model's outputs in ~50% of cases."
"Our cross-layer transcoder is trained to mimic the activations of the underlying model at each layer. However, even when it accurately reconstructs the model’s activations, there is no guarantee that it does so via the same mechanisms."
https://transformer-circuits.pub/2025/attribution-graphs/met...
These two papers were designed to be used for the sort of argument you're making. You point to a blog post that glosses over it. You have to click through "Read the paper" to find a ~100-page paper, referencing another ~100-page paper, before you find any of these caveats. The blog post you linked doesn't even feature the words "replacement (model)" or any discussion of the reliability of this approach.
Yet it is happy to make bold claims such as "we look inside Claude 3.5 Haiku, performing deep studies of simple tasks representative of ten crucial model behaviors" which is simply not true.
Sure, they added to the blog post: "the mechanisms we do see may have some artifacts based on our tools which don't reflect what is going on in the underlying model", but that seems like a lot of indirection when the fact is that all the observations discussed in the papers and the blog post are about nothing but such artifacts.
I don’t think that is the core of this paper. If anything, the paper shows that LLMs have no internal reasoning for math at all. The example they demonstrate is that it triggers the same tokens for randomly unrelated numbers. They kind of just "vibe" their way to a solution.
> sometimes this "chain of thought" ends up being misleading; Claude sometimes makes up plausible-sounding steps to get where it wants to go. From a reliability perspective, the problem is that Claude’s "faked" reasoning can be very convincing.
If you ask the LLM to explain how it got the answer the response it gives you won't necessarily be the steps it used to figure out the answer.
Grammar engines. Or value matrix engines.
Every time I try to work with them I lose more time than I gain. Net loss every time. Immensely frustrating. If I focus them on a small subtask I can gain some time (a rough draft of a test). Anything more advanced and it's a monumental waste of time.
They are not even good librarians. They fail miserably at cross referencing and contextualizing without constant leading.
I've only really been experimenting with them for a few days, but I'm kind of torn on it. On the one hand, I can see a lot of things it could be useful for, like indexing all the cluttered files I've saved over the years and looking things up for me faster than I could find|grep. Heck, yesterday I asked one a relationship question, and it gave me pretty good advice. Nothing I couldn't have gotten out of a thousand books and magazines, but it was a lot faster and more focused than doing that.
On the other hand, the prompt/answer interface really limits what you can do with it. I can't just say, like I could with a human assistant, "Here's my calendar. Send me a summary of my appointments each morning, and when I tell you about a new one, record it in here." I can script something like that, and even have the LLM help me write the scripts, but since I can already write scripts, that's only a speed-up at best, not anything revolutionary.
I asked Grok what benefit there would be in having a script fetch the weather forecast data, pass it to Grok in a prompt, and then send the output to my phone. The answer was basically, "So I can say it nicer and remind you to take an umbrella if it sounds rainy." Again, that's kind of neat, but not a big deal.
Maybe I just need to experiment more to see a big advance I can make with it, but right now it's still at the "cool toy" stage.
I feel the opposite.
LLMs are unbelievably useful for me - never have I had a more powerful tool to assist my brain. I use LLMs for work and play constantly, every day.
It pretends to sound like a person and can mimic speech and write and is all around perhaps the greatest wonder created by humanity.
It’s still not artificial intelligence though, it’s a talking library.
We invented a calculator for language-like things, which is cool, but it’s got a lot of people really mixed up.
The hype men trying to make a buck off them aren’t helping, of course.
You cannot have AGI without a physical manifestation that can generate its own training data from inputs from the outside world (e.g. via sensors) and constantly refine its model.
Pure language or pure image-models are just one aspect of intelligence - just very refined pattern recognition.
You will also probably need some aspect of self-awareness in order for the system to set auxiliary goals and directives related to self-maintenance.
But you don't need AGI in order to have something useful (which I think a lot of readers are confused about). No one is making the argument that you need AGI to bring tons of value.
The Anthropic's research on how LLMs reason shows that LLMs are quite flawed.
I wonder if we can use an LLM to deeply analyze and fix the flaws.
The new fusion power
That's 20 years away.
It was also 20 years away 30 years ago.
Explosive growth? Interesting. But at some point, human civilization hits a saturation point. There’s only so much people can eat, wear, drive, stream, or hoard. Extending that logic, there’s a natural ceiling to demand - one that even AGI can’t code its way out of.
Sure, you might double the world economy for a decade, but then what? We’ll run out of people to sell things to. And that’s when things get weird.
To sustain growth, we’d have to start manufacturing demand itself - perhaps by turning autonomous robots into wage-earning members of society. They’d buy goods, subscribe to services, maybe even pay taxes. In effect, they become synthetic consumers fueling a post-human economy.
I call this post-human consumerism. It's the point where the synthesis of demand hits the next gear - if we keep moving in this direction.
One thing in the podcast I found really interesting from a personal pov was:
> I remember talking to a very senior person who’s now at Anthropic, in 2017. And then he told various people that they shouldn’t do a PhD because by the time they completed it everyone will be automated.
Don’t tell young people things like this. Predicting the future is hard, and it is the height of hubris to think otherwise.
I remember, as a teen, I had thought that I was supposed to be a pilot all my life. I was ready to enroll in a school with a two-year program.
However, I was also into computers. One person I looked up to in that world said to me, "Don't be a pilot, it will all be automated soon and you will just be bus drivers, at best." That entirely took the wind out of my piloting sails.
This was in the early '90s, and 30 years later, it is still wrong.
Would we even recognise it if it arrived? We'd recognise human-level intelligence, probably, but that's specialised. What would general intelligence even look like?
If/when we will have AGI, we will likely have something fundamentally superhuman very soon after, and that will be very recognizable.
This is the idea of "hard takeoff" -- because of the way we can scale computation, there will only ever be a very short window when AI is roughly human-level. Even if there are no fundamental breakthroughs, at the very least silicon can be run much faster than meat, and instead of compensating for narrower width with execution speed, as current AI systems do (no AI datacenter is even close to the width of a human brain), you could just spend the money to make your AI system 2x wider and run it at 2x the speed. What would a good engineer (or a good team of engineers) be able to accomplish if they could have 10 times the workdays in a week that everyone else has?
This is often conflated with the idea that AGI is very imminent. I don't think we are particularly close to that yet. But I do think that if we ever get there, things will get very weird very quickly.
Would AGI be recognisable to us? When a human pushes over an anthill, what do the ants think happened? Do they even know the anthill is gone? Did they have a concept of the anthill as a huge edifice, or did they only know earth to squeeze through and some biological instinct?
If general intelligence arrived and did whatever general intelligence would do, would we even see it? Or would there just be things that happened that we just can't comprehend?
But that's not ten times the workdays. That's just taking a bunch of speed and sitting by yourself worrying about something. Results may be eccentric.
Though I don't know what you mean by "width of a human brain".
Mustafa Suleyman says AGI is when a (single) machine can perform every cognitive task better than the best humans. That is significantly different from OpenAI's definition (...when we make enough $$$$$, it's AGI).
Suleyman's book "The Coming Wave" talks about Artificial Capable Intelligence (ACI) - between today's LLMs (== "AI" now) and AGI. AI systems capable of handling a lot of complex tasks across various domains, yet not being fully general. Suleyman argues that ACI is here (2025) and will have huge implications for society. These systems could manage businesses, generate digital content, and even operate core government services -- as is happening on a small scale today.
He also opines that these ACIs give us plenty of frontier to be mined for amazing solutions. I agree; what we already have has not been tapped out.
His definition, to me, is early ASI. If a program is better than the best humans, then we ask it how to improve itself. That's what ASI is.
The clearest thinker alive today on how to get to AGI is, I think, Yann LeCun. He said, paraphrasing: If you want to build an AGI, do NOT work on LLMs!
Good advice; and go (re-?) read Minsky's "Society of Mind".
We sort of are able to recognize Nobel-worthy breakthroughs
One of the many definitions I have for AGI is being able to create the proofs for the 2030, 2050, 2100, etc Nobel Prizes, today
A sillier one I like is that AGI would output a correct proof that P ≠ NP on day 1
Isn't AGI just "general" intelligence, as in - like a regular human - a Turing-test kind of deal?
Aren't you thinking of ASI/superintelligence, which is capable of far outdoing humans?
There's a test for this: https://arcprize.org/arc-agi
Basically a captcha. If there's something that humans can easily do that a machine cannot, full AGI has not been achieved.
You'd be able to give them a novel problem and have them generalize from known concepts to solve it. Here's an example:
1. Write a specification for a language in natural language.
2. Write an example program.
Can you feed (1) into a model and have it produce a compiler for (2) that works as reliably as a classically built one?
I think that's a low bar that hasn't been approached yet. Until then, I don't see evidence of language models' ability to reason.
I'd accept that as a human kind of intelligence, but I'm really hoping that AGI would be a bit more general. That clever human thinking would be a subset of what it could do.
You could ask Gemini 2.5 to do that today and it's well within its capabilities, just as long as you also let it write and run unit tests, as a human developer would.
AI will face the same limitations we face: the availability of information and the non-deterministic nature of the world.
What do monkeys think about humans?
AGI isn't ASI; it's not supposed to be smarter than humans. The people who say AGI is far away are unscientific woo-mongers, because they never give a concrete, empirically measurable definition of AGI. The closest we have is Humanity's Last Exam, which LLMs are already well on the path to acing.
Consider this: if it were possible to be born/trained in 1900 and then given a year to adapt to the world of 2025, how well would an LLM do on any test? Compare that to how a 15-year-old human in the same situation would do.
I'd expect it to be generalised, where we (and everything else we've ever met) are specialised. Our intelligence is shaped by our biology and our environment; the limitations on our thinking are themselves concepts the best of us can barely glimpse. Some kind of intelligence that inherently transcends its substrate.
What that would look like, how it would think, the kind of mental considerations it would have, I do not know. I do suspect that declaring something that thinks like us would have "general intelligence" to be a symptom of our limited thinking.
Is it just me, or is the signal-to-noise ratio needle-in-a-haystack territory for all these cheerleader tech podcasts? In general, I really miss the podcast scene from 10 years ago: less polished, but more human and with reasonable content. Not this speculative blabber that seems designed to generate clickbait clips. I don't know what happened a few years ago, but even solid podcasts are practically garbage now.
I used to listen to podcasts daily for at least an hour. Now I'm stuck with uploading blogs and pdfs to Eleven Reader. I tried the Google thing to make a podcast but it's very repetitive and dumb.
Again?
Two more weeks
”‘AGI is x years away’ is a proposition that is both true and false at the same time. Like all such propositions, it is therefore meaningless.”
AGI is never gonna happen - it's the tech equivalent of the second coming of Christ, a capitalist version of the religious savior trope.
Hey now, on a long enough time line one of these strains of millenarian thinking may eventually get something right.
I guess I am agnostic then.
AGI is here today... go have a kid.
That would be "GI". The "A" part implies, specifically, NOT having a kid, eh?
Natural intelligence is too expensive. Takes too long for it to grow. If things go wrong then we have to jail it. With computers we just change the software.
You work for DOGE, don't you?
Not artificial, but yes, it's unclear what advantage an artificial person has over a natural one, or how it's supposed to gain special insights into fusion reactor design and etc. even if it can think very fast.
Good thing the Wolfenstein tech isn't a thing yet hopefully
Hopefully more!
"Literally who" and "literally who" put out statements while others out there ship out products.
Many such cases.
I do not like those who try to play God. The future of humanity will not be determined by some tech giant in their ivory tower, no matter how high it may be. This is a battle that goes deeper than ones and zeros. It's a battle for the soul of our society. It's a battle we must win, or face the consequences of a future we cannot even imagine... and that, I fear, is truly terrifying.
> The future of humanity will not be determined by some tech giant in their ivory tower
Really? Because it kinda seems like it already has been. Jony Ive designed the most iconic smartphone in the world from a position beyond reproach even when he messed up (eg. Bendgate). Google decides what your future is algorithmically, basically eschewing determinism to sell an ad or recommend a viral video. Instagram, Facebook and TikTok all have disproportionate influence over how ordinary people live their lives.
From where I'm standing, the future of humanity has already been cast by tech giants. The notion of AI taking control is almost a relief considering how illogical and obstinate human leadership can be.