If we have been complaining about bloat before, the amount of bloat we are going to witness in the future is unfathomable. How can anyone be proud of a claim like "It's 3M+ lines of code across thousands of files", _especially_ when a lot of this code relies on external dependencies? Less code is almost always better, not more!
I'm also getting really tired of claims like "we are X% more productive with AI now!" (which I'm hearing day in and day out at work, and on LinkedIn of course). Didn't we, as an industry, agree that we _didn't_ know how to measure productivity? Why is everyone suddenly believing these metrics that claim otherwise?
Look, I'm not against AI. I'm finding it quite valuable for certain scenarios -- but in a constrained environment and with very clear guidance. Letting it loose on coding is not one of them, and the hype is dangerous given how widely it's being believed.
I love the quote from Gregory Terzian, one of the servo maintainers:
> "So I agree this isn't just wiring up of dependencies, and neither is it copied from existing implementations: it's a uniquely bad design that could never support anything resembling a real-world web engine."
It hurts that it wasn't framed as an "experiment" or "look, we wanted to see how far AI can go - it kinda failed the bar." As it stands, it's grist for the mill of every CEO out there who has no clue about coding but wonders why their people are so expensive when: "AI can do it! D'oh!"
That was from a conversation here on Hacker News the other day: https://news.ycombinator.com/item?id=46624541#46709191
I wish your recent interview had pushed much harder on this. It came across as politely not wanting to bring up how poorly this really went, even for what the engineer intended.
They were making claims without the level of rigor to back them up. There was an opportunity to learn some difficult lessons, but—and I don’t think this was your intention—it came across to me as kind of access journalism; not wanting to step on toes while they get their marketing in.
pushing would definitely stop the supply of interviews/freebies/speaking engagements
I just don't think that's the case.
The claims they made really weren't that extreme. In the blog post they said:
> To test this system, we pointed it at an ambitious goal: building a web browser from scratch. The agents ran for close to a week, writing over 1 million lines of code across 1,000 files. You can explore the source code on GitHub.
> Despite the codebase size, new agents can still understand it and make meaningful progress. Hundreds of workers run concurrently, pushing to the same branch with minimal conflicts.
That's all true.
On Twitter their CEO said:
> We built a browser with GPT-5.2 in Cursor. It ran uninterrupted for one week.
> It's 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM.
> It kind of works! It still has issues and is of course very far from Webkit/Chromium parity, but we were astonished that simple websites render quickly and largely correctly.
That's mostly accurate too, especially the "it kind of works" bit. You can take exception to the "from-scratch" claim if you like. It's a tweet; the lack of nuance isn't particularly surprising.
In the overall genre of CEOs over-hyping their companies' achievements this is a pretty weak example.
I think the people making out that Cursor massively and dishonestly over-hyped this are arguing with a straw man version of what the company representatives actually said.
> That's mostly accurate too, especially the "it kind of works" bit. You can take exception to the "from-scratch" claim if you like. It's a tweet; the lack of nuance isn't particularly surprising.
> In the overall genre of CEOs over-hyping their companies' achievements this is a pretty weak example
I kind of agree, but kind of not. The tweet isn't too bad when read from an experienced engineer's perspective, but if we're being real, the target audience was probably technically clueless investors who don't and can't understand the nuance.
What people take issue with is the claim that agents built a web browser "from scratch" only to find by looking deeper that they were using Servo, WGPU, Taffy, winit, and other libraries which do most of the heavy lifting.
It's like claiming "my dog filed my taxes for me!" when in reality everything was filled out in TurboTax and your dog clicked the final submit button. Technically true, but clearly disingenuous.
I'm not saying an LLM using existing libraries is a bad thing--in fact I'd consider an LLM which didn't pull in a bunch of existing libraries for the prompt "build a web browser" to be behaving incorrectly--but the CEO is misrepresenting what happened here.
Did you read the comment that started this thread? Let me repeat that, ICYMI:
> "So I agree this isn't just wiring up of dependencies, and neither is it copied from existing implementations: it's a uniquely bad design that could never support anything resembling a real-world web engine."
It didn't use Servo, and it wasn't just calling dependencies. It was terribly slow and stupid, but your comment is more of a mischaracterization than anything the Cursor people have said.
You're right in the sense it didn't `use::servo`, merely Servo's CSS parser `cssparser`[0] and Servo's DOM parser `html5ever`[1]. Maybe that dog can do taxes after all.
[0] https://github.com/search?q=repo%3Awilsonzlin%2Ffastrender%2...
[1] https://github.com/search?q=repo%3Awilsonzlin%2Ffastrender+h...
I agree that "from scratch" is a misrepresentation.
But it was accompanied by a link to the GitHub repo, so you can hardly claim that they were deliberately hiding the truth.
How many non developers were going to look at that? They knew exactly what they were doing by saying that.
> But it was accompanied by a link to the GitHub repo, so you can hardly claim that they were deliberately hiding the truth.
Well, yes and no; we live in an era where people consume headlines, not articles, and certainly not links to Github repositories in articles. If VCs and other CEOs read the headline "Cursor Agents Autonomously Create Web Browser From Scratch" on LinkedIn, the project has served its purpose and it really doesn't matter if the code compiles or not.
The fact that the codebase is meaningless drivel has already been established; you don’t need to defend them. It’s just pure slop, and they’re trying to get people to believe that it’s a working browser. At the time he was bragging about it, `cargo build` didn’t even run! It was completely broken going back a hundred commits. So it was a complete lie to claim that it “kind of works”.
You have a reputation. You don’t need to carry water for people who are misleading others to raise VC money. What’s the point of your language lawyering about the precise meaning of what he said?
“No no, you don’t get it guys. I’m technically right if you look at the precise wording” is the kind of silly thing I do all the time. It’s not that important to be technically right. Let this one go.
Why would he push back? His whole schtick is to sell only AI hype. He’s not going to hurt his revenue.
If I sell only AI hype why do I keep telling people that many systems built on top of LLMs are inherently insecure? https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
That's a great way to tell on yourself that you've never read Simon's work.
the bare minimum of criticism to allow independence to be claimed?
I’m super impressed by how "zillions of lines of code" got re-branded as a reasonable metric by which to measure code, just because it sounds impressive to laypeople and incidentally happens to be the only thing LLMs are good at optimizing.
It really is insane. I really thought we had made progress stamping out the idea that more LOC == better software, and this just flies in the face of that.
I was in a meeting recently where a director lauded Claude for writing "tens of thousands of lines of code in a day", as if that metric in and of itself was worth something. And don't even get me started on "What percentage of your code is written by AI?"
LOC per day metrics are bovine metrics: how many pounds of dung per day.
I'd argue porcine: how many pounds of slop per day.
Every line of code is technical debt. Some of the hardest projects I’ve ever worked on involved deleting as much code as I wrote.
Exactly. I once worked on a large project where the primary contractor was Accenture. They threw a party when we hit a million lines of C++. I sat in the back at a table with the other folks who knew enough to realize it should have been a wake.
Lines of code is just phrenology for software development, but a lot of people are very incentivized to believe in phrenology.
I completely agree. The issue is that some misconceptions just never go away. People were talking about how bad lines of code is as a metric in the 1980s [1]. Its persistence as a measure of productivity only shows to me that people feel some deep-seated need to measure developer productivity. They would rather have a bad but readily-available metric than no measure of productivity.
[1] https://folklore.org/Negative_2000_Lines_Of_Code.html
That's what got me. I've never written a browser from scratch, but just being told that it took millions of lines of code made me feel like something was wrong. Maybe somehow that's what it takes? But I've worked in massive monorepos that didn't have 3 million lines of code and still supported an entire business's operations.
To be fair, it easily takes 3 million lines of code to make a browser from scratch. Firefox and Chrome both have around ten times that(!) – presumably including tests etc. But if the browser is in large part third-party libraries glued together, that definitely shouldn't take 3 million lines.
It depends on how functional you want the browser to be. I can technically write a web browser in a few lines of Perl, but you wouldn't get any styling, let alone JavaScript. Plus 90% of the code is likely going to go toward fixing compatibility issues with poorly designed sites.
FastRender isn't "in large part third-party libraries glued together". The only dependency that fits that bill in my opinion is Taffy for CSS grid and flexbox layout.
The rest is stuff like HarfBuzz for font rendering which is an entirely cromulent dependency for a project like this.
KPIs are slowly destroying the American economy. The idea that everything can be easily and meaningfully measured with simple metrics by laypeople is a myth propagated by overpaid business consultants. It's absurd and facetious. Every attempt to do so is degrading and counter-productive.
Other Western economies too. In the UK it's destroying the education system as well.
The problem is that Western societies shifted into a "zero trust" mode - on all levels. It starts with small things, like no longer being able to leave your front door unlocked when you go to work because of theft and vandalism, and it ends with insane amounts of "dumb capital" being flushed into public companies by ETFs and other investment vehicles.
And the latter is what's driving the push for KPIs the most - "active" ETFs were already bad enough, because their managers would ask the companies they invested in to provide easy-to-grok KPIs (so that they could keep more of the yearly fee instead of having to pay analysts to dig down into a company's finances), and passive ETFs make it even worse because there is now barely any margin left to pay for more than a cursory review.
America's desire for stock-based pensions is frying the world's economy with its second and third order effects. Unfortunately, that rotten system will most probably only collapse when I'm already dead, so there is zero chance for most people alive today to ever see a world free of this BS.
Citing the ability to turn on an endless faucet of code as a benefit and not a liability should be disqualifying.
These 'metrics' are deliberately meant to trick investors into throwing money into hyped up inflated companies for secondary share sales because it sounds like progress.
The reality was the AI made an uncompilable mess, adding 100+ dependencies including importing an entire renderer from another browser (servo) and it took a human software engineer to clean it all up.
> According to Perplexity, my AI chatbot of choice, this week‑long autonomous browser experiment consumed in the order of 10-20 trillion tokens and would have cost several million dollars at then‑current list prices for frontier models.
Don't publish things like that. At the very least link to a transcript, but this is a very non-credible way of reporting those numbers.
That implies a throughput of around 16 million tokens per second. Since coding agent loops are inherently sequential—you have to wait for the inference to finish before the next step—that volume seems architecturally impossible. You're bound by latency, not just cost.
The original post claimed they were "running hundreds of concurrent agents":
https://cursor.com/blog/scaling-agents
It was 2,000 concurrent agents at peak.
I'd still be surprised if that added up to "trillions" of tokens. A trillion is a very big number.
16 million a second across 2000 agents would be 8000 tokens per second per agent. This doesn't seem right to me.
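As a rough sanity check on those numbers (taking the thread's own figures at face value: 10-20 trillion tokens, a one-week run, roughly 2,000 concurrent agents at peak - all of them estimates, not confirmed values), the arithmetic works out like this:

```rust
// Back-of-the-envelope check of the token-throughput figures discussed above.
// All inputs are the estimates quoted in this thread, not measured values.
fn main() {
    let total_tokens: f64 = 10e12; // lower bound of the "10-20 trillion" estimate
    let seconds_per_week: f64 = 7.0 * 24.0 * 3600.0; // 604,800 s
    let concurrent_agents: f64 = 2000.0; // peak concurrency per the Cursor post

    let aggregate_tps = total_tokens / seconds_per_week;   // ~16.5M tokens/s
    let per_agent_tps = aggregate_tps / concurrent_agents; // ~8,300 tokens/s

    println!("aggregate: ~{:.1}M tokens/s", aggregate_tps / 1e6);
    println!("per agent: ~{:.0} tokens/s", per_agent_tps);
}
```

Even at the lower bound that is roughly 8,000 tokens per second per agent, sustained for a week, which is the mismatch with sequential agent loops that the comments above are pointing at.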
I think it's impressive for what it is: this level of complexity being reached by an AI-only workflow. Previously, anything of modest complexity required a lot of human guidance - and even with that had some serious shortcomings and crutches. If you extrapolate that the models themselves, the frameworks for inter-model workflows, the tooling available to the models, and the hardware running them are all accelerating, it's not hard to envision where this will get to. It is a notable achievement, particularly when compared with the effort and resources behind what we currently see in a browser engine: many decades and countless millions of man-hours.
Fully agree that the original authors made some unsubstantiated and unqualified claims about what was done - which is sad, because it was still a huge accomplishment as I see it.
Just had my manager submit 3 PRs in a language he doesn’t understand (Rust), which he hasn’t run or tested, and he's demanding quick reviews for hundreds of LoC. These are tools, but some people are clueless...
> These are tools
Just like your manager.
It's only fair you ask an LLM to review it for you.
From an engineer working on this here on HN:
> ...while far off from feature parity with the most popular production browsers today...
What a way to phrase it!
You know, I found a bicycle in the trash. It doesn't work great yet, but I can walk it down a hill. While far off from the level of the most popular supercars today, I think we have made impressive progress going down the hill.
Is there a way to measure the entropy of a piece of software?
Is entropy increasing or decreasing the longer agents work on a code base? If it's decreasing, no matter how slowly, theoretically you could just say "ok, start over and write version 2 using what you've learned on version 1." And eventually, $XX million dollars and YY months of churning later, you'd get something pretty slick. And then future models would just further reduce X and Y. Right?
Maybe they just need to keep iterating.
You would think a CEO with a product that caters to developers would know that everyone was going to clone the repo and check his work. He just squandered a whole lot of credibility.
> He just squandered a whole lot of credibility.
I've yet to see anyone in this space be negatively impacted by their outlandish claims.
They release a new model or add extra sub agents and the slate is wiped clean.
His target reader is management, not developers.
Management already doesn't trust developers in any way. Why would they believe you, who are clearly just trying to save your job, over a big company who clearly is the future!
Or do you trust your management to make the right decision?
I don't think the point was to say "look, AI can just take care of writing a browser now". I think it was to show just how far the tools have come. It's not meant to be production quality, it's meant to be an impressive demo of the state of AI coding. Showing how far it can be taken without completely falling over.
EDIT: I retract my claim. I didn't realize this had servo as a dependency.
This is entirely too charitable. Basically all this proves is that the agent could run in a loop for a week or so, did anyone doubt that?
They marketed it as if we were really close to having agents that could build a browser on their own. They rightly deserve the blowback.
This is an issue that is very important because of how much money is being thrown at it, and that affects everyone, not just the "stakeholders". At some point, if it does become true that you can ask an agent to build a browser and it actually does, that is very significant.
At this point in time I personally can't predict whether that will happen or not, but the consequences of it happening seem pretty drastic.
I find it hard to believe after running agents fully autonomously for a week you'd end up with something that actually compiles and at least somewhat functions.
And I'm an optimist, not one of the AI skeptics heavily present on HN.
From the post it sounds like the author would also doubt this when he talks about "glorified autocomplete and refactoring assistants".
That is a good point. It is impressive. LLMs from two years ago were impressive, LLMs a year ago were impressive, and those from a month ago even more so.
Still, getting "something" to compile after a week of work is very different from getting the thing you wanted.
What is being sold, and invested in, is the promise that LLMs can accomplish "large things" unaided.
But as of yet, they cannot, unless something is happening in one of the SOTA labs that we don't know about.
They can, however, accomplish small things unaided, though there is an upper bound, at least functionally.
I just wish everyone was on the same page about their abilities and their limitations.
To me they understand context well (e.g. the task "build a browser" doesn't need some huge specification, because specifications already exist).
They can write code competently (this is my experience anyway)
They can accomplish small tasks (my experience again, "small" is a really loose definition I know)
They cannot understand context that doesn't exist (they can't magically know what you mean, but they can bring to bear considerable knowledge of pre-existing work and conventions that helps them make good assumptions, and the agentic loop prompts them to ask for clarification when needed)
They cannot accomplish large tasks (again my experience)
It seems to me there is something akin to the context window into which a task has to fit. They have this compaction feature, which I suspect is where the limitation lies. I.e. a person can't hold an entire browser codebase in their head, but they can build a general top-level map of the whole thing, so they know where to reach, where improvements are needed, how things fit together, and what has and hasn't been implemented. I suspect this compaction doesn't work super well for agents because it is a best-effort, tacked-on feature.
I say all this speculatively, and I am genuinely interested in whether this next level of capability is possible. To me it could go either way.
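To make the compaction idea above concrete, here is a rough sketch of the kind of loop being described. Every name in it (`ContextWindow`, `TOKEN_BUDGET`, the crude 4-chars-per-token estimate) is hypothetical and purely illustrative; it is not how Cursor or any particular agent framework actually implements compaction.

```rust
// Hypothetical sketch of context compaction in an agent loop.
// None of these types correspond to a real framework; they only illustrate
// the "summarize old context so the task keeps fitting in the window" idea.
const TOKEN_BUDGET: usize = 8_000;

struct ContextWindow {
    messages: Vec<String>,
}

impl ContextWindow {
    fn token_count(&self) -> usize {
        // Crude stand-in for a real tokenizer: roughly 4 characters per token.
        self.messages.iter().map(|m| m.len() / 4).sum()
    }

    fn compact(&mut self) {
        // Replace the oldest half of the transcript with a lossy summary.
        // A real system would ask the model to write this summary; that lossy
        // step is exactly what the comment above is skeptical about.
        let half = self.messages.len() / 2;
        let summary = format!("[summary of {half} earlier messages]");
        self.messages.drain(..half);
        self.messages.insert(0, summary);
    }

    fn push(&mut self, msg: String) {
        self.messages.push(msg);
        if self.token_count() > TOKEN_BUDGET {
            self.compact();
        }
    }
}

fn main() {
    let mut ctx = ContextWindow { messages: Vec::new() };
    for i in 0..5_000 {
        ctx.push(format!("agent step {i}: edited a file, ran the tests"));
    }
    // Most of the history has been folded into summaries by now.
    println!("messages kept: {}", ctx.messages.len());
    println!("approx tokens: {}", ctx.token_count());
}
```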
Maybe so, but I don't think 3 million lines of code to ultimately call `servo.render()` is a great way to demonstrate how good AI coding is.
lmao okay, touché. I did not realize it had servo as a dependency.
It didn't have Servo as a dependency.
Take a look in the Cargo.toml: https://github.com/wilsonzlin/fastrender/blob/19bf1036105d4e...
I haven't really looked at the fastrender project to say how much of a browser it implements itself, but it does depend on at least one servo crate: cssparser (https://github.com/servo/rust-cssparser).
Maybe there is a main servo crate as well out there, and fastrender doesn't depend on that crate, but at least in my mind fastrender depends on some servo browser functionality.
EDIT: fastrender also includes the servo HTML parser: html5ever (https://github.com/servo/html5ever).
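For anyone unfamiliar with those crates, here's a minimal sketch of what driving Servo's `cssparser` looks like; the CSS string and the tokenize-and-print loop are purely illustrative and are not taken from FastRender's actual code (you'd need `cssparser` in your own Cargo.toml to run it):

```rust
// Minimal illustration of Servo's cssparser crate (https://github.com/servo/rust-cssparser).
// This is not FastRender code; it just shows the kind of low-level tokenizing
// the crate provides, on top of which an engine builds its real CSS handling.
use cssparser::{Parser, ParserInput};

fn main() {
    // A made-up declaration value, purely for illustration.
    let css = "1px solid #336699";

    let mut input = ParserInput::new(css);
    let mut parser = Parser::new(&mut input);

    // Walk the token stream the way an engine's declaration parser would.
    while let Ok(token) = parser.next() {
        println!("{:?}", token);
    }
}
```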
Yeah, but starting with a codebase that is (at least approaching) production quality and then mangling it into something that's very far from production quality... isn't very impressive.
If you want to learn more about the Cursor project directly from the source, I conducted a 47-minute interview with Wilson Lin, the developer behind FastRender, last week.
We talked about dependencies, among a whole bunch of other things.
You can watch the full video on YouTube or read my extracted highlights here: https://simonwillison.net/2026/Jan/23/fastrender/
If I were to spend a trillion tokens on a barely working browser, I would have started with the source code of Sciter [0] instead. I really like the premise of an Electron alternative that compiles to a 5MB binary, with a custom data store based on DyBASE [1] built into the front-end JavaScript so you can just persist any object you create. I was ready to build software on top of it but couldn't get the basic Windows tutorial to work.
[0] https://sciter.com/
[1] http://www.garret.ru/dybase.html
Anyone remember finding the Internet Explorer control in Windows Forms, placing it down, adding some buttons, and telling people you made your own web browser? Maybe this exercise is eternal, just in different forms.
People thinking this does not matter just because the code is awful, it used dependencies, or whatever, are missing the point.
6 months ago with previous models this was absolutely impossible. One of the biggest limitations of LLMs is their difficulty with long tasks. This has been steadily improving and this experiment was just another milestone. It will be interesting a year from now to test how much better new models fare at this task.
Our modern economy is nearly entirely built on useless bullshit, this is just what it looks like when the ouroboros starts devouring its own tail. It doesn't matter that the product doesn't work; the hype is the product. In our collective nihilism, we have productized faith itself.
I mean, maybe they should have started simple and slowly iterated.
project 1: build a text-based browser using ratatui and quickjs.
project 2: base it on project 1. Convert to a GUI; pages should render pure HTML.
project 3: acid1 compliance. Use constraint-based programming to output the final render, no animation support.
etc etc.
Every single high-profile story that shows up in the feeds about how LLMs are just about there and coders are doomed turns out, if you actually read it and are a programmer, to be a story about how LLMs are bad and generate trash code that rarely even looks superficially good and definitely doesn't work.
There was a story going around about LLMs making minesweeper clones, and they were all terrible in extremely dumb ways. The headline wasn't obvious, so I thought the take that people were getting from it is that AI is making the same dumb mistakes that it was making a year ago. Nope. It was people ranting about how coders are going to be out of a job next week. Meanwhile, none of them can do a minesweeper clone with like 50 working examples online, maybe 8 things you have to do right to be perfect, and 9000 articles about minesweeper and even mathematical papers about minesweeper to make everything about the game and its purpose perfectly clear. And then AI generates buttons that don't do anything and timers that don't stop.
Was that a while ago? Minesweeper's pretty easy.
Claude Opus 4.5: "Build minesweeper as an artifact, don't use react"
(Then "Fix it to work on mobile where right click isn’t a thing")
Play it here: https://tools.simonwillison.net/minesweeper
Transcript here: https://claude.ai/share/2d351b62-a829-4d81-b65d-8f3b987fba23
grifters gonna grift
FTA:
> tools like Cursor can be genuinely helpful as glorified autocomplete and refactoring assistants
That suggests a fairly strong anti-AI bias by the author. Anyone who thinks that this is all AI coding tools are today is not actually using them seriously.
That's not to say that this exercise wasn't overhyped, but a more useful, less biased article that's not trying to push an agenda would look at what went right, as well as what went wrong.
No, it suggests the sarcasm that is The Register's in-house style. See the page tagline: "Biting the hand that feeds IT".
AI will never be able to create a browser, just as AI was never able to defeat a chess grandmaster.
Yeah that's one of the real takeaways from this. This will improve over time. People seem to get so put off by hype that they forget there can be things of real significance underneath it. You could make a long list of what's amazing and promising about this "implement a browser" task, despite all its shortcomings.
So grifting is okay, just because someday the grift might come true?