> React isn't slow. It can be pretty fast (thanks to hooks like useTransition or useOptimistic, and now React Compiler, etc.)
I want to preface this by saying I have nothing against React; I have used it professionally for a couple of years and it's fine and perfectly good enough.
That being said, React is slow. That is why you need useTransition, which is essentially manual scheduling (letting React know some state update isn't very important so it can prioritise other things), something you don't need to do in other frameworks.
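For what it's worth, a minimal sketch of that manual scheduling (assuming React 18+; the component and data are made up):

    import { useState, useTransition } from 'react';

    export function Search({ items }: { items: string[] }) {
      const [query, setQuery] = useState('');
      const [filtered, setFiltered] = useState(items);
      const [isPending, startTransition] = useTransition();

      return (
        <>
          <input
            value={query}
            onChange={(e) => {
              setQuery(e.target.value);   // urgent: keep typing responsive
              startTransition(() => {     // non-urgent: React may interrupt this
                setFiltered(items.filter((i) => i.includes(e.target.value)));
              });
            }}
          />
          {isPending ? <p>updating…</p> : filtered.map((i) => <div key={i}>{i}</div>)}
        </>
      );
    }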
useOptimistic does not improve performance, but perceived performance. It lets you show a placeholder of a value while waiting for the real computation to happen. Which is good, you want to improve perceived performance and make interactions feel instant. But it technically does not improve React's performance.
It is pretty established at this point that React has (relative) terrible performance. React isn't successful because it's a superior technology, it's successful despite being an inferior technology. It's just really difficult to beat an extremely established technology and React has a huge ecosystem, so many companies depend on it that the job market for it is huge, etc.
As to why it is slow, my knowledge isn't super up-to-date (I haven't kept up that well with recent updates), but in general the idea is:
- The React runtime itself is 40 kB, so before doing anything (before rendering in CSR or before hydrating in SSR) you need to download the runtime first.
- Most frameworks have moved on to using signals to manage state updates. When state changes, observers of that state are notified and the least amount of code is run before updating the DOM surgically. React instead re-executes the code of entire component trees, compares the result with the current DOM and then applies the changes. This is a lot more work and a lot slower. Over time, techniques have been developed in React to mitigate this (memoization, React Compiler, etc.), but it still does a lot more work than it needs to, and these techniques are often not needed in other frameworks because they do a lot less work by default (see the sketch below).
The js-framework-benchmark [1] publishes benchmarks testing hundreds of frameworks for every Chrome release if you're interested in that.
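The sketch mentioned above - a minimal illustration of React's whole-subtree re-rendering, assuming React 18 (component names are made up):

    import { memo, useState } from 'react';

    function Child() {
      console.log('Child re-rendered'); // logs on every Parent update
      return <div>static content</div>;
    }

    const MemoChild = memo(Child); // manual opt-out: skipped when props are unchanged

    export function Parent() {
      const [n, setN] = useState(0);
      return (
        <>
          <button onClick={() => setN(n + 1)}>{n}</button>
          <Child />     {/* function body re-runs on every click */}
          <MemoChild /> {/* rendered once, then skipped */}
        </>
      );
    }

A signals-based framework would re-run only the button's text binding here; React only discovers that nothing else changed after re-running the components and diffing.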
> It is pretty established at this point that React has (relative) terrible performance.
> it is slow
You're not answering my question, just adding some more feelings.
> The React runtime itself is 40 kB
React is < 10 kB compressed https://bundlephobia.com/package/react@19.2.0 (add react-dom to it). That's not really significant according to the author's figures; the header speaks about "up to 176.3 kB compressed".
> Most frameworks have moved on to using signals to manage state updates. When state changes
This is not about kilobytes or initial render times, but about rendering performance in a highly interactive application. Signals would not impact rendering a blog post, only rendering a complex app's UI. The original blog post does not measure this; it's out of scope.
I don't know how bundlephobia calculates package size; let me know if you're able to reproduce its numbers in a real app. The simplest Vite + React app with only a single "Hello, World" div and no dependencies (other than react and react-dom), no hooks used, ships 60+ kB of JS to the browser (when built for production, minified and gzipped).
Now the blog post is not just using React but Next.js which will ship even more JS because it will include a router and other things that are not a part of React itself (which is just the component framework). There are leaner and more performant React Meta-Frameworks than Next.js (Remix, TanStack Start).
> This is not kilobytes or initial render times, but performance in rendering in a highly interactive application
True, but it's another area where React is a (relative) catastrophe.
The large bundle size on the other hand will definitely impact initial render times (in client-side rendering) and time-to-interactive (in SSR), because it's so much more JS that has to be parsed and executed for the runtime before even executing your app's code.
EDIT: It also does not have to be a highly interactive application at all for this to apply. If you change a single value that is read in a component deep within a component tree, you will definitely feel the difference, because that entire component tree is going to execute again. Even though the resulting diff will show that only one deeply nested div needs to be updated, React has no way of knowing that beforehand, whereas signal-based frameworks do.
And finally I want to say I'm not a React hater. It's totally possible to get fast enough performance out of React. There are just more footguns to be aware of.
This is a great comparison, but it depends so much on what sort of website or web app you are building. If you are building a content site, with the majority of visitors arriving without a hot cache, bundle size is obviously massively important. But for a web app, with users regularly visiting, it's somewhat less important.
As ever on mobile it's latency, not bandwidth, that's the issue. You can very happily transfer a lot of data, but if that network is in your interactive hot path then you will always have a significant delay.
You should optimise to use the available bandwidth to solve the latency issues, after FCP. Preload as much data as possible such that navigations are instant.
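As a rough sketch of that preloading idea (the URLs are hypothetical):

    // after first paint, warm the HTTP cache for likely next navigations
    addEventListener('load', () => {
      for (const href of ['/api/listings?page=2', '/account/summary.json']) {
        const link = document.createElement('link');
        link.rel = 'prefetch'; // low-priority background fetch
        link.href = href;
        document.head.append(link);
      }
    });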
> Let’s be honest: “desktop-only” is usually an excuse to skip performance discipline entirely
No, it is an excuse not to invest money in places where users won't pay.
As for mobile: yeah, we get requests for a mobile version, but an app in the app store is a hard requirement because of discoverability. People know how to install an app from the app store, and then they have an icon. Creating a PWA icon is still too much work for normal people.
I would need "add to home screen" button in my website that I could have user making icon with single click, then I could go with PWA.
> when I built the first implementations and started measuring, something became clear: the issues I was seeing with Next.js weren’t specific to Next.js. They were fundamental to React’s architecture.
So here some obscure Next.js issues magically become fundamental React architecture issues. What are these? Skill issues?
Comparing something like next.js to other frameworks doesn’t make much sense anymore given that most webdevs choose DX and easy deployment above anything else. Vercel’s growth is proof of that.
I'm surprised there's no mention of Flutter. If the goal is mobile performance, Flutter would be top of mind - you can build to both native apps and web.
All of them can, but you get the most benefit from a full-stack JavaScript framework if you are indeed running server-side JS. But you can still build statically in any of them (assuming you are not using any server-only features) and deploy as plain HTML/JS.
150kb downloads almost instantly, even on 3G. Most websites have an image bigger than that somewhere on their homepage. It's not worth changing how I work.
JS can be 100x or even 1000x more expensive to process than images. JS also blocks the main thread, while images can be decoded in the background (and on the GPU).
Your attitude is exactly why our supercomputers struggle to display even the simplest things with any kind of performance, and why pure text takes multiple seconds to appear
> Here’s where this gets bigger than framework choice. When you ship a native app to the App Store or Google Play instead of building a web app, you’re not just making a technical decision. You’re accepting a deal that would’ve been unthinkable twenty years ago. Apple and Google each take up to 30% of every transaction (with exceptions depending on program and category). They set rules. They decide what you can ship. They can revoke your access tomorrow with no recourse. You have no alternative market. You can’t even compete on price because the fee is baked into many transactions.
Interesting, in my case I'm using Capacitor (since I need Bluetooth and it's not well supported across browsers), and I want the app to work offline for some features too, so I don't really care about the bundle size (it ships with the native app), and I prefer an SPA over an MPA so I can build it once and ship it with the code.
(I'll be that guy since the article emphasizes a good mobile web experience so hard)
You might want to fix your horizontal scroll on mobile. I should basically never have a full page horizontal scrollbar on a page that is mostly just text.
Ugh. That thinking is what gets you things like mandatory login via apps for your desktop. And not every application makes sense on a phone. And some Web Applications just require low latency high bandwidth internet to work properly.
> some Web Applications just require low latency high bandwidth internet to work properly.
But the vast majority do not. And this haranguing is an opportunity / defensible position to put more effort and resources into performance. If nothing else, think of it as a Trojan horse to make software suck less.
>If nothing else, think of it as a Trojan horse to make software suck less.
My experience has been that the proliferation of mobile devices has made my desktop experience consistently worse and I struggle to come up with an example where it didn't.
The guy is such a web zealot that he refuses to make the sensible engineering tradeoff that favors speed and offline capabilities over platform ubiquity. Most sane people would write a native app for this sort of thing if money was on the line.
Am I missing something here? A mobile SPA can be deployed to a device using tools like Capacitor, where the framework along with all static content is loaded into the app bundle. In that case it makes no (realistic) difference which framework is selected; it matters more how background/slow transfers are handled with data-only API requests, possibly with hosted images. With background workers a PWA can be built as well, streamlining installation even more.
Does that involve shipping a native wrapper for your web app?
If so, you have the extra cost, effort and bureaucracy of building and deploying to all the different app stores. Apple's App Store and Google Play each have various annoyances and limitations, and depending on your market there are plenty of other stores you might need to be in.
Sometimes you do need a native or native-feeling app, in which case a native wrapper for JS probably is a good idea, other times you want something lightweight that works everywhere with no deployment headaches.
As much as I agree with the app deployment headaches, apps provide something a website cannot (except a PWA): the ability to do stuff offline, logging and recording data that can be uploaded when the connection is re-established. When talking about user experience: launching the app, selecting new -> quote -> entering details -> save -> locking the phone without worrying or waiting, knowing that it will eventually get uploaded, is much more convenient than walking around the property with the phone to get better reception just to load the new-quote page.
UX matters, and the user does not care whether the native wrapper or 500 kB of JS is there or not, as long as the job is done conveniently and fast.
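For what it's worth, the queue-and-flush part of that can be sketched in a few lines (localStorage for brevity; a real app would use IndexedDB and Background Sync where available; the endpoint is hypothetical):

    const KEY = 'pendingQuotes';

    function saveQuote(quote: object) {
      const queue = JSON.parse(localStorage.getItem(KEY) ?? '[]');
      queue.push(quote);
      localStorage.setItem(KEY, JSON.stringify(queue)); // returns instantly, offline or not
    }

    window.addEventListener('online', async () => {
      const queue = JSON.parse(localStorage.getItem(KEY) ?? '[]');
      for (const quote of queue) {
        await fetch('/api/quotes', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(quote),
        });
      }
      localStorage.setItem(KEY, '[]');
    });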
> This isn’t a todo list with hardcoded arrays. It’s a real app with database persistence, complex state management, and the kind of interactions you’d actually build for a real product.
Can you also tell ChatGPT to fix the layout so the table just above this message is fully visible without horizontal scrolling?
Yeah, the writing is obvious ChatGPT slop, sadly.
Edit: Related post on the front page: https://news.ycombinator.com/item?id=45722069
The user who posted that also posted this thread's link yesterday, as well as many others. The account seems to be karma farming with AI-generated articles.
https://news.ycombinator.com/item?id=45724022
Can you guys share actual errors in the article that indicate real slop?
On first glance it seems very legit, and personally I would be very hesitant to judge something as GPT slop based on writing style alone.
How about this?
>> Marko delivers 12.6 kB raw (6.8 kB compressed). Next.js ships 497.8 kB raw (154.5 kB compressed). That’s a 39x difference in raw size that translates to real seconds on cellular networks.
Sorry, it isn't 2006; cellular networks aren't spending "seconds" on the difference between 13 kB and 500 kB.
Payload size can matter, but it's complete nonsense that 500kB would translate to "real seconds".
Just spotted this section:
>> The real-world cost: A 113 kB difference at 3G speeds (750 kbps) means 1.2 seconds for download plus 500ms to 1s for parse/execution on mobile CPUs. Total: 1.5 to 2 seconds slower between frameworks.
3G is literally being decommissioned, and 3G isn't 750 kbps; it's significantly faster than that.
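Back-of-envelope, ignoring latency entirely (single warm connection, steady link):

    // transfer time in seconds for a payload at a given link speed
    const seconds = (kB: number, kbps: number) => (kB * 8) / kbps;

    seconds(113, 750);  // ≈ 1.2 s: the quoted figure does follow from 750 kbps
    seconds(113, 8000); // ≈ 0.11 s at a more typical multi-Mbps HSPA/LTE link

So the article's 1.2 s only holds if you grant the 750 kbps premise, which is exactly the number in dispute.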
> On first glance it seems very legit
Yes, that's exactly the danger of AI slop. It's very plausible, very slick and very easy to digest. It also frequently contains unchecked errors without any strong signals that would traditionally go along with that.
I can attest to the differences mentioned, having visited many cities around the world. Assuming your own local performance reflects that of the rest of the world is not accurate.
Hm. Have you ever spent time away from the city?
The article also cites the use case: real estate agents. They too seem to struggle at times with bad connections. And on a bad connection, average websites do take seconds to load for me.
Questioning if I spend time away from a city is patronising nonsense.
Websites taking seconds to load in bad mobile reception is usually down to latency and handshaking, not raw bandwidth.
Show me a real world example of a single payload 500kB taking seconds longer than 13kB. It's not realistic.
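To put rough numbers on the latency point (assumed figures, not measurements):

    const rtt = 0.5;       // seconds per round trip on poor reception
    const setup = 3 * rtt; // DNS + TCP + TLS: a few round trips before byte one
    const transfer = (kB: number, kbps: number) => (kB * 8) / kbps;

    setup + transfer(13, 5000);  // ≈ 1.5 s
    setup + transfer(500, 5000); // ≈ 2.3 s

The fixed setup cost hits the 13 kB payload and the 500 kB payload alike; size only starts to dominate once the connection is warm.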
I can take you for a ride in the subway I use to commute to work, where internet is sluggish for a section of it.
I can also show you how slow it is when I visit the countryside and the connection is not good.
Or when I take a very crowded train to another city/country and have to share the wi-fi while traveling in a non-metropolitan area.
Or when I run out of pre-paid credits and I get bumped into low speed mode and the provider's page takes several minutes to load.
I don't even know why I'm answering this. Because for sure this is all my fault and I'm the one "holding it wrong".
I'm not denying bad internet exists.
I'm saying that the impact of dropped packets and poor latency falls much worse on sites that have multiple connections and dozens of files to download than a single bundle.
Also in those circumstances, the 13kB would also take "seconds".
The situation described, where the 13kB file takes milliseconds but the 500kB file takes seconds, is what is unrealistic. It's an invention of an LLM.
Chances are two different 13kB files would be far worse in those circumstances than a single 500kB file.
I don't know why I'm still answering this thread, because it's clear I'm not being understood, and this is all arguing over a flagged AI slop article that no-one wrote.
> Also in those circumstances, the 13kB would also take "seconds".
Yeah, but a couple of seconds I can wait. A few minutes, not realistically, unless it's something really important.
Dismissing the bandwidth issue just makes you seem out of touch and stubborn. There’s a reason HN is one of my favorite sites when I’m on LTE. Payload size matters.
I question it because I live somewhere rural with a bad connection and travel frequently around Europe, where I often experience bad connections outside of cities, so I do value lightweight pages, as the article's authors propose as a metric. Heavyweight pages I don't even bother trying to load in some areas.
"Show me a real world example of a single payload 500kB taking seconds longer than 13kB. It's not realistic."
And my only comment towards this is, please go out to see for yourself.
Also maybe take into account that the bloated website is not the only thing using the device's connection: Messenger messages syncing in the background, and so on.
In the summary at the top they also use a different smallest compressed size: "The real differentiator? Bundle sizes range from 28.8 kB to 176.3 kB compressed."
That's why I stopped reading at your first quote, it didn't fit with the summary and there's no point reading a bunch of numbers and wondering which are made up.
Also
> This isn’t just an inconvenience. It’s technofeudalism.
There are so many of these in the article. It's like a spit in the face.
Come full circle by feeding the article back to your favorite LLM and ask it to TL;DR it for you.
This is a great write-up. I especially appreciate the focus on mobile, because I find it's often overlooked, even though it's the dominant device for accessing the web. The reality of phones is brutal, and delivering a good experience for most users in an SPA-style architecture is pretty hard.
"Slowness poisons everything."
Exactly. There's nothing more revealing than seeing your users struggle to use your system, waiting for the content to load, rage clicking while waiting for buttons to react, waiting for the animations to deliver 3 frames in 5 seconds.
Engineering for a P75 or P90 device takes a lot of effort, way beyond what frameworks offer you by default. I hope we'll see more focus on this from the framework side, because I often feel like I have to fight the framework to get decent results - even for something like Vue, which looks pretty great in this comparison.
As somebody using Svelte for a real production application, I can only 100% agree with their recommendations regarding Svelte, because the overall dev experience is unmatched. It just feels right. Easy. Simple. And I'm not even considering performance here as another benefit.
I usually make the analogy of a video game, where you can pick the difficulty. Svelte/SvelteKit is working in the "easy" difficulty level. You can achieve the same end result and keep your sanity (and your hair).
I've been using Svelte's custom elements (web components) to make components that slot into pages on an existing .NET / Alpine.js site. It's been a great dev experience and results in really portable components. Each component is its own bundle (achieved via separate Vite configs - you can also organise builds so that groups of components that work together share a bundle). Each of the tools in the tools section is a Svelte custom element: https://www.appsoftware.com/tools/utilities/calculators
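For anyone curious, a minimal sketch of the per-component build (assuming Svelte 4+ with @sveltejs/vite-plugin-svelte; the paths and tag name are made up):

    // vite.config.ts - one config like this per component (or group)
    import { defineConfig } from 'vite';
    import { svelte } from '@sveltejs/vite-plugin-svelte';

    export default defineConfig({
      plugins: [svelte({ compilerOptions: { customElement: true } })],
      build: {
        lib: {
          // the component declares <svelte:options customElement="my-calculator" />
          entry: 'src/tools/Calculator.svelte',
          formats: ['es'],
          fileName: 'calculator-element',
        },
      },
    });

Each config emits an independent ES bundle you can drop into any page with a script tag.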
Can we build the elements as part of the light DOM? Do they call their destructors when we navigate away?
I will keep using Next.js, because that is what SaaS vendors support on their extension SDKs, and I have better things to do than build an ecosystem.
Alternatives are great for those without these kinds of constraints.
In which case, I'd rather use traditional Java and .NET frameworks with minimal JavaScript, if any at all.
How do you deal with the horrendously slow on-the-fly compile times in dev mode?
I wonder how anyone gets any work done when they have to wait 10 seconds on every page load on a M3 Macbook Air
Turbopack helps. Ever used C, C++, Rust, Scala, or Swift in large-scale projects?
Back in 1999 - 2001, every time I wanted to do a make clean; make all in a C-based product (actually Tcl with lots of C extensions), it took at least an hour of build time.
I would choose Vue because you can still get paid for it, but React is king for jobs. If you're playing in the hobby space, then between LiveView, Datastar etc. there is plenty of cool stuff moving the needle. React is nice and simple IMHO, which is why average devs like me enjoy it.
>React is nice and simple IMHO which is why average devs like me enjoy it.
Maybe years ago. Now it's a bloated beast.
Can you give some examples? I feel like React is still pretty much just React, having developed with it for a decade now. Hooks was the only meaningful API (surface) change, no?
> having developed with it for a decade now
I think this is the reason why React feels normal to you. But as someone coming into it fresh, React felt like there were always 4 different ways to do the same thing and 3 of them are wrong because they built a new API/there are more idiomatic ways to accomplish the same thing now. If you have a decade of experience, then you probably do most things the right/obvious way so don't even notice all the incorrect ways/footguns that React gives you.
If you're coming into it in 2025, it's even simpler. Just ignore the SSR stuff which Vercel are pushing and you're good. A lot of the path has been smoothed out over the years to make it an ideal place to start today.
I feel like the introduction of React Compiler was a pretty big change, too?
The article seems to make the bloat self-evident by comparing the load times of identical apps and finding React orders of magnitude slower.
To be fair, I haven't written in React for a few years now. I reached for Svelte with the last two apps I built after using React professionally for 4 years. I was expecting there to be a learning curve and there just... wasn't? It was staggering how little I had to think about. Even something as small as not having to write in JSX (however normalized I was to writing in it) really felt meaningful once I took a step back and saw the forest for the trees.
I dunno. I just remember being on the interview circuit and asking engineers to tell me about useCallback, useEffect, useMemo, and memo and how they're used, how something like console.log would fare in relation to them, when to include/exclude arguments from memoization arrays, etc., and it was pretty easy to trip a lot of people up. I think the introduction of the compiler is an attempt to mitigate a lot of those pains, but newer frameworks were designed with those headaches in mind from the start rather than mitigating them much later, and you can feel it.
I rolled my eyes when hooks came out and never used React again except for work, so not really. All the frameworks on the planet, and Facebook is still a heaping pile of dog shit. I was spoiled by Vue's lifecycle methods and then Svelte, and it was impossible to go back.
Maybe hooks are cool but the same code written in react vs vue vs svelte or something else is always easier on the eyes and more readable. Dependency arrays and stale closures are super annoying.
Sorry but I really hate React. I've dealt with way too many shit codebases. Meanwhile working in vue/svelte is a garden of roses even if written by raw juniors.
This is going to sound selfish, but I liked being a solo React Typescript developer. My colleagues worked on UI/UX, back-end, DB, specs, etc, but I was responsible for the React code and I could just iterate and iterate without having to submit every change as a pull request.
Now with Laravel, Blade and jQuery the IDE support is low, but everything is easy enough, we work as a team and do merge requests, and it's a chill job even if it's full stack.
Hilarious that it's come full circle again. React was a breath of fresh air for FEs back in the day, and now we're back at jQuery! Why the switch from React to Laravel/Blade/jQuery?
>I liked being a solo React Typescript developer.
Being a solo FE rocks. Everyone thinks you're a magician. The worst is FE-by-committee where you get 'full-stack' devs but really they're 99% postgres and 1% html.
In our small firm, we did a review of the usual suspects when deciding which of the big players would be the right horse to bet on for the future when planning to rewrite our core application.
We ended up with Vue vs. Svelte and landed on Vue/Nuxt since we agreed they have the most intuitive syntax for us, and it seemed like the one with the best trajectory, technologically speaking.
That was one year ago. It's not moving as fast as I would hope, but I still think Vue/Nuxt is a better choice than React at least. This article seems to support this somewhat.
Also, I did a review (with the help of all the big LLMs), and they seem to agree that Vue has the syntax and patterns that are best suited for agentic coding assistance.
The wins with regard to "First Contentful Paint" and "size" are not the most important thing. We just trust the Vue community more. React seems like a recipe for a bloated bureaucratic mess. Svelte still looks like a strong contender, but we liked the core team of Vue a lot, and most of us just enjoy Vue/Nuxt syntax/patterns better.
A big advantage with Vue is also that it has options and composition API, so if one feels janky you can still try the other. I've tried moving away from Vue just to test some other frameworks but none have given me such an easy way to manage state, reactivity, modularity... I always come back to it.
vue is better, the problem is it's been dead for more than 3 years now.
The only way this makes sense is if you are looking at the Vue 2 GitHub page. The new Vue 3 is at 52k stars on GitHub, has multiple releases per month, and is ranked 7th A-tier framework in the "State of JS" framework rankings. It holds second place in frontend framework "experience with" and "sentiment," just behind React, with 6+ million weekly downloads on socket.dev and npmstats, ahead of both Angular and Svelte. So, I guess we have a different definition of "dead."
What do you mean? Vue is perfectly capable and mature
This is a really good article. It’s not my bailiwick, but it must be extremely useful for folks that work in this space.
> When someone’s standing in front of a potential buyer trying to look professional, a slow-loading app isn’t just an annoyance. It’s a liability.
I liked reading that. It’s actually surprising how few developers think that way.
> Mobile is the web
That’s why.
I know many people that don’t own a computer, at all, but have large, expensive phones. This means that I can’t count on a large PC display, but I also can reasonably expect a decent-sized smaller screen.
I’ve learned to make sure that my apps and sites work well on high-quality small screens (which is different from working on really small screens).
The main caveat is the quality of the network connection. I find that I need things to work OK if the connection is dicey.
> When someone’s standing in front of a potential buyer trying to look professional, a slow-loading app isn’t just an annoyance. It’s a liability.
I've been there myself as a dev and later on as a manager. You have to really watch out for getting locked into local minima here. In most cases it's not bundle size that wins this, but engineering an app that can gracefully work offline, either by having the user manually pre-load data or by falling back to good caches.
> good caches
Some of the most challenging code that I write is about caches.
Writing good cache support is hard.
I think writing good cache support _can_ be very, very hard.
But in cases like the one the grandparent describes, you do have significant wiggle room.
I write native apps (see "not my bailiwick," above).
It's fairly difficult, for me. The app can do a lot, but sometimes, the data needs to be fresh. Making the decision to run an update can be difficult.
Also, I write free software, for nonprofits, so the hosting can sometimes be a bit dodgy.
Ignoring the content of the post for a second (which IMO was excellent), the quality of the writing here is remarkable. This is a dry technical topic at heart, and yet I enjoyed reading that entire report. It was as informative as I could hope for whilst still being engaging.
What a joy to read.
It’s 10,000 words and a curious mixture of dense and sparse. There’s quite a bit of duplication (especially of figures), a fair bit of circumlocution in the narrative sections, and a lot of meaninglessly precise figures, half of which should have been omitted altogether. I am confident it could be significantly improved by a hard cap of 5,000 words, and suspect even 2,000 words could still be better (though 1,000 would definitely be too short to convey it all). Even apart from that, it definitely needed a table of contents, to set expectations.
As a general challenge to people: write your article, then see if you can halve its length without losing much. If it felt too easy, repeat the process! There’s a family of well-known quotes that amount to “sorry for writing a long letter, I didn’t have time to write a short letter”. Concise expression is not the easiest, but very valuable. Many a 100-page technical book can be improved by reduction to a one-page non-prose overview/cheat sheet (perhaps using diagrams and tables, but consider going more freeform like you might on a whiteboard) plus a ten page abridged version.
This isn't just poor writing, it's ChatGPT-padded slop.
But the same is true for the content itself, no business is paying you to actually build the same app 10x, especially so if it's something as trivial as a kanban board.
They'd comfortably pay for 10 AI-assisted versions. It's a trivial demo app so that implementing it 10 times is feasible - it's just to learn what to build their main app in.
I wouldn't measure how good/fast/performant a library is by looking at the results of the very first LLM attempt at doing a trivial task using that library. If you don't know the libraries well enough to spot the improvements the LLM missed, the only thing you're judging is either how sane the defaults are or how good the LLM is at writing performant code using that library, neither of which is equivalent to how good the library is.
Also, performing well in a prototype scenario is very different from performing well in a production-ready scenario with a non-trivial number of templates and complex operations. Even the slowest SSGs perform fast when you put three Markdown posts and one layout in them, but after a few years of real-world usage you can end up in a scenario where the full build takes about half an hour.
Kinda cool that you can do that in an afternoon, but absolutely useless as a benchmark of anything.
Eventually English textbooks are going to start including the "this isn't... it's" pattern because it's so prevalent in AI slop. I close anything I read now at the first sign of it.
Mhh, I found it repeated sentences again and again. It was kinda odd to read at times.
Before starting new projects I would always do research like this and try new things. But I've stopped looking at what is out there. I have landed on Django/React (Vite). I have mastered this stack and can go from idea to app running in production in a matter of hours. I know there are better, faster, and more modern alternatives. But I just don't care anymore. Maybe I'm just web framework jaded. I'd rather learn something else than look through the docs of yet another web framework.
To be honest, as long as your app isn’t doing something crazy complex, it’s going to be fast enough for most people even on the slowest stack. I wouldn’t worry about it, personal efficiency is way more important most of the time I’d say.
> Maybe I’m just web framework jaded.
At the end of the day there have been a lot of new things in web development but none of them are of such a significance that you’re missing out on anything by sticking with what works. I personally just like to go with a mature backend framework (usually Laravel or Django) and minimal JS on the frontend. I’ve tried many of the shiny new libraries but have not seen much reason to switch over.
I’ll speak for you:
You’ve stopped caring because it. never. ends. Really.
The author noted that Solid is very similar to React but with a simpler mental model. This has been my experience as well. And it's faster, too.
I'm a fan of Solid for the same reasons.
I particularly like that (JSX aside) it's just JavaScript, not a separate language with its own compiler like Svelte (and by the sounds of it Marko, which I hadn't heard of before). You can split your app into JS modules, and those can all use Solid signals, even the internal bits that don't have their own UI.
How would you compare it to Preact?
Interesting to see Marko and Solid topping the performance metrics. Ryan Carniato* was a core team member of Marko and started Solid. I wouldn't be surprised if SolidStart can eventually lower its bundle size further.
*) https://github.com/ryansolid
The article is a bit disappointing in that it focuses too much on bundle size. Bundle size is important for sure, especially in rural areas with poor mobile signal, but time-to-interactive is IMHO more important, and that's where resumable frameworks like Qwik and Marko 6 shine.
Solid is great for raw rendering speed, but it hydrates just like React (unless you use an islands framework on top, like Astro, which has its own limitations), while Qwik and Marko are resumable out of the box.
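For illustration, a resumable counter in Qwik looks something like this (assuming Qwik v1's API):

    import { component$, useSignal } from '@builder.io/qwik';

    export const Counter = component$(() => {
      const count = useSignal(0);
      // nothing here runs on page load; onClick$ is serialized as a lazy
      // reference that the browser only fetches on the first click
      return <button onClick$={() => count.value++}>{count.value}</button>;
    });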
I wish he would have combined Astro with solid instead of HTMX for a more direct comparison
I prefer to use whatever I'm more comfortable with over something that is measurably the fastest horse in the stable. Trading dev time, skill and comfort for a few kB of memory and a few ms of speed seems pointless to me.
By the way, my "horse" of choice is Quasar (based on Vue) and has been for years now.
Thanks for posting, a lot of effort went into that and I think the quality shines through in the write up.
I write pretty lean HTML/vanilla JS apps on the front end and C#/SQL on the backend, and have had great customer success on mobiles with a focus on a lot of the metrics the author hammers home.
I believe the biggest performance hit lies in the inability to force-reload a cached file with JS (or even HTML(!)).
Setting a header only works if you know exactly when you are going to update the file. Except for highly dynamic or sensitive things, that is never the case.
You can add ?v=2 to each and every instance of a URL on your website. But then you have to update all pages, which is preposterous and exactly what we didn't want. As a bonus, ?v=1 is not erased, which might also be just what you didn't want.
I never want to reload something until I do.
This is a solved problem. All modern JavaScript bundlers append a hash to the filename, so even if cached indefinitely, the JS that hits the browser will update when it has changed, because the URL changes.
There are also other solutions, cleaner than appending a query string, if you need to preserve the URL - like ETags.
The standard solution is to have small top-level HTML files with short expiration (or no caching at all), then all the other assets (CSS, JS, images) have content-hashed filenames and are cached indefinitely.
Vite gives you that behaviour out of the box.
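As a concrete sketch of that setup (the server choice, paths and port are my own assumptions, not anything from the article): with Express, the content-hashed assets get year-long immutable caching, while the HTML shell is revalidated on every load.

```ts
import express from "express";

const app = express();

// dist/assets/* holds content-hashed files like index-BxK3f9.js; any code
// change produces a new filename, so these are safe to cache indefinitely.
app.use(
  "/assets",
  express.static("dist/assets", { immutable: true, maxAge: "1y" })
);

// The HTML entry point must always be revalidated so it can point at new hashes.
app.get("/", (_req, res) => {
  res.set("Cache-Control", "no-cache"); // revalidate (via ETag) on every load
  res.sendFile("index.html", { root: "dist" });
});

app.listen(3000);
```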
I greatly appreciated this article and have found the data very useful - I have shared this with my business partner and we will use this information down the road when we (eventually) get around to migrating our app from Angular to something else. Neither of us were surprised to see Angular at the bottom of the league tables here.
Now, let's talk about the comments, particularly the top comment. I have to say I find the kneejerk backlash against "AI style" incredibly counter-productive. These comments are creating noise on HN that greatly degrades the reading experience, and, in my humble opinion, these comments are in direct violation of all of the "In Comments" guidelines for HN: https://news.ycombinator.com/newsguidelines.html#comments
Happy to change my mind on this if anyone can explain to me why these comments are useful or informative at all.
Am I the only one shocked that there's no comparison, test, or even consideration of native development? Are web devs this closed off to other languages? I came here for that kind of comparison because of the article's headline.
It's not about being closed to other languages, it's about being economically pragmatic in many, many cases.
As shown in the article, you can build ONCE an app that loads in milliseconds, just by providing a URL to any potential customer. It works on mobile and on desktop, on any operating system.
The native alternative requires:
- Separate development for every platform you target (to be widely used you need *at least* iOS, Android, macOS and Windows).
- Customers are required to download and install something before using your platform, creating additional friction.
And all of this to obtain at most 20-30ms better loading times?
There are plenty of cases where native makes sense and is necessary, but most apps have very little to gain at the cost of a massive increase in development resources.
Native to the web like web components or a native platform?
The problem of native apps isn't the language but the app stores.
Web deployment is easier, faster and cheaper.
If I trust this article, React is a (relative) catastrophe.
Can someone explain why? What precisely would make React sooo slow and big compared to other abstractions?
I'm not exactly sure about "big", but it's slow because it has a worse change-tracking and rendering model, one that requires doing more work to figure out what needs to be updated, unless you manually opt out where you know better. Solid, Vue and other signals-based frameworks have granular change tracking, so they can skip a lot of that work.
But this mostly applies to subsequent re-renders, while the things mentioned in the article are more about initial render, and I'm not exactly sure why React suffers there. I believe React can't skip the VDOM on the server, while Vue and Solid use compiled templates that let them render directly to a string, so maybe it's partially that?
React isn't slow. It can be pretty fast (thanks to hooks like useTransition or useOptimistic, and now React Compiler, etc.) - it's just that it takes a lot of learning and work to use React correctly. Some people don't like that, and that's why other frameworks with different trade-offs exist.
The other thing is that React is too big in terms of the kilobytes of JavaScript you have to download and then parse (and often, thanks to the great React ecosystem, you use many other libraries on top). But that's just another trade-off: it's the price you pay for great backwards compatibility (e.g. you can still use React class components; you don't have to use hooks, etc.).
I want to preface this by saying I have nothing against React, I have used it professionally for a couple years and it's fine and perfectly good enough.
That being said, React is slow. That is why you need useTransition, which is essentially manual scheduling (letting React know some state update isn't very important so it can prioritise other things), something you don't need to do in other frameworks (sketched below).
useOptimistic does not improve performance, only perceived performance. It lets you show a placeholder value while waiting for the real computation to happen. That's good: you want to improve perceived performance and make interactions feel instant. But it technically does not improve React's performance.
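A rough sketch of the manual scheduling described above (the list-filtering scenario is my own example, not from the thread): the input update stays urgent, while the expensive list update is marked as interruptible.

```tsx
import { useState, useTransition, type ChangeEvent } from "react";

function FilterableList({ items }: { items: string[] }) {
  const [query, setQuery] = useState("");
  const [filtered, setFiltered] = useState(items);
  const [isPending, startTransition] = useTransition();

  function onChange(e: ChangeEvent<HTMLInputElement>) {
    setQuery(e.target.value); // urgent: keep the input responsive
    startTransition(() => {
      // non-urgent: React may interrupt this if more keystrokes arrive
      setFiltered(items.filter((item) => item.includes(e.target.value)));
    });
  }

  return (
    <>
      <input value={query} onChange={onChange} />
      {isPending ? (
        <p>Filtering…</p>
      ) : (
        <ul>{filtered.map((item) => <li key={item}>{item}</li>)}</ul>
      )}
    </>
  );
}
```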
Honestly I don't know about mobile apps but React for a desktop website has no performance issue provided that the code is of sufficient quality.
It is pretty established at this point that React has (relative) terrible performance. React isn't successful because it's a superior technology, it's successful despite being an inferior technology. It's just really difficult to beat an extremely established technology and React has a huge ecosystem, so many companies depend on it that the job market for it is huge, etc.
As to why it is slow, my knowledge isn't super up-to-date (I haven't kept up that well with recent updates), but in general the idea is:
- The React runtime itself is 40 kB, so before doing anything (before rendering in CSR, or before hydrating in SSR) you need to download the runtime first.
- Most frameworks have moved on to use signals to manage state updates. When state changes, observers of that state are notified and the least amount of code runs before updating the DOM surgically (a minimal Solid sketch follows below). React instead re-executes the code of entire component trees, compares the result with the current DOM and then applies changes. This is a lot more work and a lot slower. Over time, techniques have been developed in React to mitigate this (memoization, React Compiler, etc.), but it still does a lot more work than it needs to, and these techniques are often not needed in other frameworks because they do a lot less work by default.
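The Solid sketch (example mine): the component body runs once, and afterwards only the text node that reads count() is updated.

```tsx
import { createSignal } from "solid-js";
import { render } from "solid-js/web";

function Counter() {
  const [count, setCount] = createSignal(0);
  // This function runs ONCE: the JSX compiles to real DOM nodes, and
  // reading count() subscribes just this text node to future changes.
  return <button onClick={() => setCount(count() + 1)}>Count: {count()}</button>;
}

render(() => <Counter />, document.body);
```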
The js-framework-benchmark [1] publishes benchmarks testing hundreds of frameworks for every Chrome release if you're interested in that.
[1]: https://krausest.github.io/js-framework-benchmark/2025/table...
> It is pretty established at this point that React has (relative) terrible performance.

> it is slow
You're not answering my question, just adding some more feelings.
> The React runtime itself is 40 kB
React is < 10 kB compressed: https://bundlephobia.com/package/react@19.2.0 (add react-dom to it). That's not really significant according to the author's figures; the header speaks of up to "176.3 kB compressed".
> Most frameworks have moved on to use signals to manage state updates. When state changes
This is not about kilobytes or initial render times, but about rendering performance in a highly interactive application. It would not matter for rendering a blog post, but it would for rendering a complex app's UI. The original blog post does not measure this; it's out of scope.
> You're not answering my question, just adding some more feelings.
Well you seemed surprised by this fact, even though it's a given for most people working in front-end frameworks.
> React is < 10 kb compressed https://bundlephobia.com/package/react@19.2.0 (add react-dom to it).
I don't know how Bundlephobia calculates package size; let me know if you're able to reproduce those numbers in a real app. The simplest Vite + React app with only a single "Hello, World" div and no dependencies (other than react and react-dom), no hooks used, ships 60+ kB of JS to the browser when built for production, minified and gzipped (sketch below).
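For anyone who wants to check that number themselves, this is roughly the entire app I mean, assuming the stock Vite `react-ts` template; `npm run build` prints the gzipped chunk sizes.

```tsx
// main.tsx — the whole app: one div, no hooks, no other dependencies
import { createRoot } from "react-dom/client";

createRoot(document.getElementById("root")!).render(<div>Hello, World</div>);
```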
Now, the blog post is not using just React but Next.js, which ships even more JS because it includes a router and other things that are not part of React itself (which is just the component framework). There are leaner and more performant React meta-frameworks than Next.js (Remix, TanStack Start).
> This is not kilobytes or initial render times, but performance in rendering in a highly interactive application
True, but it's another area where React is a (relative) catastrophe.
The large bundle size, on the other hand, will definitely impact initial render times (in client-side rendering) and time-to-interactive (in SSR), because it's that much more JS that has to be parsed and executed for the runtime before your app's code even runs.
EDIT: It also does not have to be a highly interactive application at all for this to apply. If you change a single value that is read in a component deep within a component tree, you will definitely feel the difference, because that entire component tree is going to execute again (even though the resulting diff will show that only one deeply nested div needs to be updated, React has no way of knowing that beforehand, whereas signal-based frameworks do).
And finally, I want to say I'm not a React hater. It's totally possible to get fast enough performance out of React. There are just more footguns to be aware of.
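To illustrate one of those footguns/manual opt-outs (example is mine): without memo, <Expensive /> re-executes on every count change even though nothing it renders depends on count.

```tsx
import { memo, useState } from "react";

// memo skips re-running the component as long as its props are unchanged;
// without it, React would re-execute this whole subtree on every App render.
const Expensive = memo(function Expensive() {
  return <p>Imagine a deep component tree here</p>;
});

function App() {
  const [count, setCount] = useState(0);
  return (
    <>
      <button onClick={() => setCount(count + 1)}>{count}</button>
      <Expensive />
    </>
  );
}
```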
This is a great comparison, but it depends so much on what sort of website or web app you are building. If you are building a content site, with the majority of visitors arriving without a hot cache, bundle size is obviously massively important. But for a web app, with users visiting regularly, it's somewhat less important.
As ever on mobile it's latency, not bandwidth, that's the issue. You can very happily transfer a lot of data, but if that network is in your interactive hot path then you will always have a significant delay.
You should optimise to use the available bandwidth to solve the latency issues, after FCP. Preload as much data as possible such that navigations are instant.
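A hedged sketch of that approach (the endpoint is hypothetical; adapt to your routes): once the page has loaded and the main thread is idle, spend the spare bandwidth warming the cache for the likely next navigation.

```ts
// After first paint, prefetch the data for likely next navigations so the
// later navigation pays no network round trip.
window.addEventListener("load", () => {
  const prefetch = () => {
    // Warm the HTTP cache; prefetching is best-effort, so swallow failures.
    fetch("/api/listings").catch(() => {});
  };
  if ("requestIdleCallback" in window) {
    requestIdleCallback(prefetch); // wait until the main thread is idle
  } else {
    setTimeout(prefetch, 1000); // fallback for browsers without the API
  }
});
```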
Let’s be honest: “desktop-only” is usually an excuse to skip performance discipline entirely
No, it is an excuse not to invest money in places where users won't pay.
As for mobile: yeah, we get requests for showing it on mobile, but an app in the app store is a hard requirement because of discoverability. People know how to install an app from the app store, and then they have an icon. Making a PWA icon is still too much work for normal people.
I would need an "add to home screen" button on my website that lets the user create the icon with a single click; then I could go with a PWA.
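For what it's worth, Chromium-based browsers can do roughly that via the beforeinstallprompt event (sketch below; it still requires a valid manifest and service worker, and Safari doesn't support it, so this is a partial answer at best). The event type isn't in TypeScript's lib.dom, so it's declared inline.

```ts
interface BeforeInstallPromptEvent extends Event {
  prompt(): Promise<void>;
}

let deferredPrompt: BeforeInstallPromptEvent | null = null;
const button = document.querySelector<HTMLButtonElement>("#install")!;

window.addEventListener("beforeinstallprompt", (e) => {
  e.preventDefault(); // suppress the browser's mini-infobar
  deferredPrompt = e as BeforeInstallPromptEvent;
  button.hidden = false; // reveal our own one-click install button
});

button.addEventListener("click", async () => {
  if (!deferredPrompt) return;
  await deferredPrompt.prompt(); // shows the native install dialog
  deferredPrompt = null;
  button.hidden = true;
});
```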
This post made me open up the Svelte docs again.
I'd be interested in seeing React Native in this comparison.
I'm not overly familiar with it, but we use it at work. I've no idea if I should expect it to be quicker or slower than something like Next.
What do you hope to see from the result of that comparison?
To gauge where RN sits on the spectrum of fast to slow.
> when I built the first implementations and started measuring, something became clear: the issues I was seeing with Next.js weren’t specific to Next.js. They were fundamental to React’s architecture.
So here some obscure Next.js issues magically become fundamental React architecture issues. What are these? Skill issues?
Excellent work, I love experiments like these. Unsurprisingly the site is also lightning fast to load.
> 40ms round-trip time
In that case how can you possibly get 35ms FCP? Am I missing something?
Comparing something like next.js to other frameworks doesn’t make much sense anymore given that most webdevs choose DX and easy deployment above anything else. Vercel’s growth is proof of that.
Regarding the reference to SpeedCurve: they have a skip link on their home page, yet no id="main" is to be found anywhere.
I'm surprised there's no mention of Flutter. If the goal is mobile performance, Flutter would be top of mind - you can build to both native apps and web.
Can Marko run static without a server? Can any of these?
All of them can, but you get the most benefit from a full-stack JavaScript framework if you are indeed running server-side JS. Still, you can build statically in any of them (assuming you are not using any server-only features) and deploy as plain HTML/JS.
Yeah, Svelte can.
Can’t most of them? Certainly React and Angular can as well.
Nuxt can too
150kb downloads almost instantly, even on 3G. Most websites have an image bigger than that somewhere on their homepage. It's not worth changing how I work.
JS can be 100x or even 1000x more expensive to process than images. JS also blocks the main thread, while images can be decoded in the background (and on the GPU).
If only it were 150 kB for most sites. Usually that's followed up with multiple assets and API calls, often chained, making the site slow.
The app in the article is a relatively simple demo app. These are the build times and sizes from a real, relatively large React SPA I help maintain:
At these sizes, an islands/resumable-based approach can trim a ton of loading time on mobile. See Performance Inequality Gap: https://infrequently.org/2024/01/performance-inequality-gap-...
Your attitude is exactly why our supercomputers struggle to display even the simplest things with any kind of performance, and why pure text takes multiple seconds to appear.
Seems overly concerned with bundle size, which I'm not sure ever really matters? Certainly smaller is better, but is it that big of an impact?
Yes, it matters a lot, especially on mid/low end devices.
Great post. I'm moderately annoyed that on Safari mobile it has an incorrect and super annoying horizontal scroll.
Any reason not to include a native application, given how important the mobile experience was?
Reasons are given near the end:
> Here’s where this gets bigger than framework choice. When you ship a native app to the App Store or Google Play instead of building a web app, you’re not just making a technical decision. You’re accepting a deal that would’ve been unthinkable twenty years ago. Apple and Google each take up to 30% of every transaction (with exceptions depending on program and category). They set rules. They decide what you can ship. They can revoke your access tomorrow with no recourse. You have no alternative market. You can’t even compete on price because the fee is baked into many transactions.
App stores complicate lifecycles by orders of magnitude.
> ... all deliver instant 35-39ms performance. The real differentiator? ...
Thanks ChatGPT for your valuable slop. Next article.
Interesting. In my case I'm using Capacitor (since I need Bluetooth and it's not well supported across browsers), and I want the app to be able to work offline for some features too, so I don't really care about the bundle size (it ships with the native app), and I prefer an SPA over an MPA so I can build it once and ship it with the code.
(I'll be that guy since the article emphasizes a good mobile web experience so hard)
You might want to fix your horizontal scroll on mobile. I should basically never have a full page horizontal scrollbar on a page that is mostly just text.
This made me look at our current app: it's a whopping 10 MB just to get to the landing page. Built with Angular.
.. creating a maintenance issue right now.
> This isn’t just an inconvenience. It’s technofeudalism.
> This isn’t a todo list with hardcoded arrays. It’s a real app with database persistence (appears twice)
this article was written by ChatGPT. I'm tired
it doesn't matter. in 10 years, few people will access websites directly.
> The web is mobile. Build for that reality.
Ugh. That thinking is what gets you things like mandatory login via apps for your desktop. And not every application makes sense on a phone. And some Web Applications just require low latency high bandwidth internet to work properly.
> some Web Applications just require low latency high bandwidth internet to work properly.
But the vast majority do not. And this haranguing is an opportunity, a defensible position from which to put more effort and resources into performance. If nothing else, think of it as a Trojan horse to make software suck less.
> If nothing else, think of it as a Trojan horse to make software suck less.
My experience has been that the proliferation of mobile devices has made my desktop experience consistently worse and I struggle to come up with an example where it didn't.
> But the vast majority do not.

Yeah, and that's why they're shit and barely work.
Even a PHP app without decorations would be faster and better for most applications.
> That thinking is what gets you things like mandatory login via apps for your desktop.
"the web is mobile" = strictly "apps" ?
The guy is such a web zealot that he refuses to make the sensible engineering tradeoff that favors speed and offline capabilities over platform ubiquity. Most sane people would write a native app for this sort of thing if money was on the line.
Am I missing something here? A mobile SPA can be deployed using tools like Capacitor, so the framework, along with all static content, is loaded into the app bundle. In that case it makes no (realistic) difference which framework is selected; it matters more how the background/slow transfers are handled with data-only API requests, possibly with hosted images. With background workers, a PWA can be built as well, streamlining installation even more.
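For reference, pointing Capacitor at the built SPA is a one-file affair; a minimal capacitor.config.ts looks roughly like this (appId and appName are placeholders).

```ts
import type { CapacitorConfig } from "@capacitor/cli";

const config: CapacitorConfig = {
  appId: "com.example.app", // placeholder bundle id
  appName: "Example",       // placeholder display name
  webDir: "dist",           // the built SPA ships inside the native binary
};

export default config;
```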
Does that involve shipping a native wrapper for your web app?
If so, you have the extra cost, effort and bureaucracy of building and deploying to all the different app stores. Apple's App Store and Google Play each have various annoyances and limitations, and depending on your market there are plenty of other stores you might need to be in.
Sometimes you do need a native or native-feeling app, in which case a native wrapper for JS probably is a good idea, other times you want something lightweight that works everywhere with no deployment headaches.
As much as I agree about app deployment headaches, apps provide something a website cannot (except a PWA): the ability to do stuff offline, logging and recording data that can be uploaded once the connection is re-established. In user-experience terms: launching the app, selecting new -> quote -> entering details -> save -> locking the phone without worrying or waiting, knowing it will eventually get uploaded, is much more convenient than walking around the property with the phone hunting for better reception just to load the new-quote page.
UX matters, and the user does not care whether it's a native wrapper or 500 kB of JS, as long as the job gets done conveniently and fast.
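A rough sketch of that save-now-upload-later flow (the endpoint and data shape are invented; a production app would likely use IndexedDB and Background Sync rather than localStorage, but the idea is the same):

```ts
type Quote = { property: string; details: string };

const KEY = "pendingQuotes";

export function saveQuote(q: Quote) {
  const pending: Quote[] = JSON.parse(localStorage.getItem(KEY) ?? "[]");
  pending.push(q);
  localStorage.setItem(KEY, JSON.stringify(pending)); // instant, works offline
  void flush(); // try immediately; harmless if still offline
}

async function flush() {
  if (!navigator.onLine) return;
  const pending: Quote[] = JSON.parse(localStorage.getItem(KEY) ?? "[]");
  while (pending.length > 0) {
    try {
      await fetch("/api/quotes", {
        method: "POST",
        body: JSON.stringify(pending[0]),
      });
      pending.shift(); // uploaded, drop it from the queue
      localStorage.setItem(KEY, JSON.stringify(pending));
    } catch {
      return; // still unreachable; keep the rest and retry later
    }
  }
}

// Retry whenever the connection comes back.
window.addEventListener("online", () => void flush());
```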