Vercel has made a string of puzzling decisions since the introduction of the App Router. Next could've become JS's Rails; instead it's a confusing mess: Turbopack, caching, middleware (now called "proxy"), their silly layout components, the RSC implementation, pushing unfinished alpha versions of everything. Next is a conceptual mess of initialisms (RSC, SSR, PPR, SSG, ISG). Hosting integrations are semi-proprietary and they reliably break basic JS APIs like fetch and redirects.
And despite all of that, they don't ship the basics that every app needs, like i18n and auth.
Next should no longer be chosen under any circumstances.
there was a heyday when react was concerned with client-side stuff only - yeah, redux was a little complex but you needed to learn it once, and react router didn't change every two days. we built incredible enterprise stuff around that stack: react (with classes), react-router, redux. that's the last time I enjoyed react.
everything else is now a hamster wheel: running but staying in one place. marketing and money now drive everything else. Maybe there's a fire & motion strategy going on in some places.
things like inertia help, but not by much.
Next.js isn't React. Plain React has been pretty stable, if you ignore server components. And it's quite easy to ignore them despite all the noise around them, they're entirely optional.
Good luck with that.
We chose Next.js precisely because it is the only option offered as the extension SDK in many enterprise products.
No one wants to land in a project and sell the customer on using a DIY framework instead of the SDK of the product they are paying for.
Then you didn't choose; you were forced to.
For the vast majority of cases there are many non-DIY alternatives within the React world, and that is before asking the "do we even need React?" question, which of course one should.
Fair but that sounds like a requirement of your project.
Which is completely ignored in an assertion like,
> Next should no longer be chosen under any circumstances.
I can't imagine how slinging ungodly obfuscated JS blobs^w chunks over the wire can ever be the next Rails...
This is beside the point. And I can't imagine how anyone would want to write templates in anything other than JSX and yet here we are.
Rails and friends aren't about a single technical choice, but about providing a complete and opinionated framework for building web apps that conceptually fits in one person's head.
Rails is that. Django is that. Laravel is that. Next is the opposite, where the Next-specific details around caching won't fit in a single head, let alone the app you're building.
How can anything that is completely un-debuggable (like the mentioned chunks) fit in any one person's head?
I have yet to reach the limits of doing a Vite create and installing react router myself for the several entirely client side apps we manage. It has sane build defaults and for whatever definition of ‘works’ is possible in JS, ‘just works’. If it becomes too complex for that basic setup it usually means we’ve over-complicated something.
Where we need a server side, Node.js just never felt natural for us, so we stuck with Java Spring Boot or Flask/FastAPI as appropriate.
ever since react router got merged with remix to become react router v7, I looked around for a simpler alternative and landed on Wouter, which is fine.
Dang, looks like wouter does the same thing as react router v6+ and nested routes don't get all params / paths of the route. ~~Also doesn't have react router v5's route-string typescript parsing.~~
https://github.com/molefrog/wouter?tab=readme-ov-file#route-...
> Also doesn't have react router v5's route-string typescript parsing.
It does, assuming you're talking about automatically parsing "/foo/:id" and getting a typed "{ id: string? }" route params object out of it? Wouter does that when using typescript.
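For illustration, a minimal sketch of what that inference looks like, assuming wouter v3 with TypeScript (the route and components are hypothetical):

    // A minimal sketch: wouter v3 + TypeScript inferring params from the
    // pattern string. Route and component names are hypothetical.
    import { Route, useRoute } from "wouter";

    function UserPage({ params }: { params: { id: string } }) {
      return <h1>User {params.id}</h1>;
    }

    export function App() {
      // params is inferred as { id: string } from "/users/:id",
      // and is null when the route doesn't match.
      const [match, params] = useRoute("/users/:id");
      return (
        <>
          {match && params && <p>Viewing user {params.id}</p>}
          <Route path="/users/:id" component={UserPage} />
        </>
      );
    }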
Thanks for the correction, after looking at the types I'm guessing it's this bit: https://github.com/molefrog/wouter/blob/v3/packages/wouter/t...
I'd love to hear more about what motivated the switch. All the additions to react-router are, afaict, opt-in. React-router has 3 "modes"[0] and the declarative mode seems pretty much exactly what the classic library is like with some extra components/features you don't have to use
Though I've enjoyed the code-splitting and access to SPA/SSR/SSG/etc strategies that come with the "framework" mode
[0] https://reactrouter.com/start/modes
I'm using React Router v7 in framework mode, fetching data directly from the database in server-side code. So far, it works reasonably well as long as we avoid mixing server code with client components. However, although it offers the convenience of writing frontend and backend code together, the added mental burden makes me feel, more and more often, that it is not worth it.
Yep, declarative mode is what I use, tried the data mode with loaders and it never really agreed with me as a pattern.
I’ve been using Wouter on multiple medium sized projects for 3-4 years now. I’m never going back to react-router if I can avoid it: a hellhole of API churn and self promotion
I've been using wouter in all of my projects for years after being burned by some react router migration bullshit eons ago.
wouter is great until you need hash routing and then it's shite.
TanStack is the sane option here, whether their router or their start product.
I kind of agree with this, but many times now I've also wanted static-site components in my app too, and in a standard Express+React app that gets awkward.
I think there must be a better way. I don’t think it’s next.js and I’m not convinced it’s Astro either. There’s still room for new ideas here, surprisingly
Around 8 years ago, when Angular vs React was still a war worth reading about, frameworks were, I think, in their final state. They gave you basic tools, and you could build applications with them. I felt like framework creators didn't treat us like babies who needed handholding.
Idk if a new generation of younger developers took over, but things started becoming too shiny. Blog posts were no longer about performance, ease of use, same solutions. I couldn't even understand some post titles. There is just no bandwidth to follow these things anymore. Why is a router a thing that needs to be continually rebuilt and tinkered with? Did we not learn ages ago how routers should work?
What innovation are we seeking? Is it just developers treating frameworks like their weekend experiments?
> Why is a router a thing that needs to be continually rebuilt and tinkered with. Did we not learn ages ago how routers should work?
Nyes. The biggest innovation in the past 5 years has been routers that can coordinate loading data because they’re perfectly positioned to know what you’re about to access.
This is a hard problem that we’ve been solving forever. It feels like super tedious formulaic work to write an optimized SQL query or series of API requests that fetches all the necessary data with the fewest possible lookups. So we try to automate it with a sufficiently smart compiler of some sort. Query planners inside a database, ORMs, graphql, routers, memory managing compilers, it’s all chasing the same dream – what if the computer Just Knew the quickest way to grab just the right amount of data.
I've re-read this comment at least 5 times and feel like I'm having a stroke reading it each time. And something similar happens really often when I enter the hype-driven side of React these days..
I do wish I had a more useful critique, and I'm not even trying to be mean (or boorish as it were) but you're rolling so many things up into each other that there's no useful way to interpret the statement. At least, not without ending up giving you a great chance to just say "no no no you completely missed what I'm saying" and then coming up with a new equally dizzying statement.
How you manage to drag query planners into routers into compilers, how are these chasing one dream or fungible or even barely correlated, I don't even know.
-
It's awful and sad how tech is one of the few fields that just insists on making things worse for itself. You can walk into McDonalds and ask how the process can be improved, and I guarantee every suggestion is targeted at making their jobs easier in a way that at least superficially aligns with making the service quicker, which is something the company does care about.
In tech you ask and someone goes on a poetic journey about how the paper cups, the drive-thru speaker, and the coffee machine are all chasing the same dream, and also we need a $100,000 espresso machine that takes 10 minutes of intense labor to brew a shot because then I'll be qualified to operate a $100,000 espresso machine at my next job which pays better than McDonalds.
We did not figure out how to brew coffee before, that was all wrong and we needed to make the process at least 10x more complicated.
> How you manage to drag query planners into routers into compilers, how are these chasing one dream or fungible or even barely correlated, I don't even know.
Let me expand.
In component-based UIs you have components that fetch their own data. Declare their data dependencies if you will. This makes them independent and easy to move around.
You can blindly take a component that renders, say, the user’s name and avatar and move it anywhere in your app. It’s going to know how to recognize the current user, fetch their data, and render what it needs. This is great for reusability and flexibility.
Now what happens if you have a bunch of these components on a page? You get lots of uncoordinated data fetches. Each component is independent so it doesn’t know about the others so they can’t coordinate.
But who does know about them? That’s right, the router knows. So the router can make a more coordinated data fetch and get data for all those components.
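For example, a minimal sketch of that idea using React Router's data-mode loaders (v6.4+); the endpoints and types are hypothetical, but the point is that the loader knows everything the route will render, so it can start all the fetches in parallel up front:

    import {
      createBrowserRouter,
      useLoaderData,
      type LoaderFunctionArgs,
    } from "react-router-dom";

    type User = { name: string; avatarUrl: string };
    type Repo = { id: number; name: string };

    async function profileLoader({ params }: LoaderFunctionArgs) {
      // Both requests start in parallel because the router knows
      // what the page will render before any component mounts.
      const [user, repos] = await Promise.all([
        fetch(`/api/users/${params.id}`).then((r) => r.json() as Promise<User>),
        fetch(`/api/users/${params.id}/repos`).then((r) => r.json() as Promise<Repo[]>),
      ]);
      return { user, repos };
    }

    function ProfilePage() {
      const { user, repos } = useLoaderData() as { user: User; repos: Repo[] };
      return (
        <>
          <img src={user.avatarUrl} alt={user.name} />
          <ul>{repos.map((r) => <li key={r.id}>{r.name}</li>)}</ul>
        </>
      );
    }

    export const router = createBrowserRouter([
      { path: "/users/:id", element: <ProfilePage />, loader: profileLoader },
    ]);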
This is the same job as a controller in the classical MVC approach where you write data fetching (sql queries) in the controller so data is ready when you access it in the template (jinja or whatever). Exact same problem, different naming conventions.
And just like with ORMs you have the same N+1 issue where your template pokes an attribute that was not prefetched and triggers a bunch of data fetches (sql queries). So you end up doing lots of tedious work to always hoist up the joins into your controller to trick the ORM into having that data preloaded.
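A minimal sketch of that N+1 shape and the hoisted join, assuming a Prisma-style ORM with hypothetical User/Post models:

    import { PrismaClient } from "@prisma/client";

    const prisma = new PrismaClient();

    async function nPlusOne() {
      // One query for users, then one more query per user when the
      // template pokes the posts relation.
      const users = await prisma.user.findMany();
      for (const user of users) {
        const posts = await prisma.post.findMany({ where: { authorId: user.id } });
        console.log(user.id, posts.length);
      }
    }

    async function hoisted() {
      // The controller declares the relation up front, so everything is
      // preloaded in one round trip.
      const users = await prisma.user.findMany({ include: { posts: true } });
      for (const user of users) {
        console.log(user.id, user.posts.length);
      }
    }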
Wouldn’t it be great if somehow your ORM could understand all the render logic and write its own single query that gets all the data? Yes it would. Just as wouldn’t it be great if your router could understand all the fetches your UI components will do and make one big request in the beginning? Ofc.
Now here’s how this relates to query planners: Wouldn’t it be great if you could just declare what data you want and the computer figured out how to read just the right amount from disk, join together the relevant bits, and gave you the output? Without you having to write a bunch of manual disk reading logic, loops to clean and correlate the data, and while also minimizing the amount of disk access manually? Of course it would.
Why should it be the router? A saner option is to let the parent component coordinate the data fetches for the child components. What you are suggesting is way out of the normal responsibility of a router.
The router is just the most parenty component. It’s the component that takes some input (like a url) and uses what is essentially a giant switch statement to decide what gets rendered.
> Now what happens if you have a bunch of these components on a page? You get lots of uncoordinated data fetches. Each component is independent so it doesn’t know about the others so they can’t coordinate.
No. You write a data layer that deduplicates requests. And since you typically need state management for fetches for UX, why not roll it all up into a tidy hook?
https://tanstack.com/query/latest
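A minimal sketch of that "tidy hook", assuming TanStack Query v5 (the endpoint and data shape are hypothetical); any number of components can call it, and requests sharing the query key are deduplicated and cached:

    import { useQuery } from "@tanstack/react-query";

    type User = { id: string; name: string; avatarUrl: string };

    export function useCurrentUser() {
      return useQuery({
        queryKey: ["currentUser"],
        queryFn: async (): Promise<User> => {
          const res = await fetch("/api/me");
          if (!res.ok) throw new Error("failed to load user");
          return res.json();
        },
        staleTime: 60_000, // serve cached data for a minute before refetching
      });
    }

    // Both components render from the same single request:
    function Avatar() {
      const { data } = useCurrentUser();
      return data ? <img src={data.avatarUrl} alt={data.name} /> : null;
    }
    function Greeting() {
      const { data } = useCurrentUser();
      return <p>Hello, {data?.name ?? "..."}</p>;
    }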
I guess the lynchpin is under-engineering at the request layer "forcing" massive over-engineering at the routing layer.
-
By the way, your self-answered rhetoricals are really really off the reservation in terms of relevance so I'm not even going to go there other than pointing out that a bad analogy is worse than no analogy.
You're trying to explain request waterfalls. Just say request waterfall. Look up the term if you have to, it's rare that you need to invent a definition for a basic and common problem.
The niche that Next rightfully dominates is these latency-optimized but highly dynamic and parallelizable sites: minimal "per user" state lives close to the user, it's vertically integrated, and it gives tools to frontend-oriented devs.
React expanded into server-side, it's messy now, but not that complicated.
Things look like the SpaceX rocket motor with all the diagnostic stuff still on, but in a few revisions it'll likely look (and feel) more sleek.
...
And it's true that the end-to-end experience matters at restaurants, be they fine dining with 3 thousand Gault Millau stars or a worldwide franchise with such perverse incentives built in that it should be 18+ anyway (and not just because of the hyper-palatable sauces).
...
And the argument is that these things matter especially when you have a lot of users.
...
That said the people responsible for docs at NextJS should be sent to the fullstack mines.
Kind of. When you have the influencer culture and startups that hire based on GitHub performance, one needs to stand out somehow, e.g. by building frameworks.
You've captured something important here. There's been a shift from "solve problems" to "create novel patterns." The incentives are all wrong—framework authors get validation from innovation theater, not from boring reliability.
I think part of it is that the web developer community exploded. More developers = more people trying to make their mark = more churn. Everyone wants to be the person who "fixed React" or "reimagined routing."
But when you're actually building a product that users depend on, you realize how much of a tax this is. Every framework "upgrade" that breaks things is time NOT spent on features, user feedback, or actual problems.
The irony is that the best products are often built with "boring" tech that just works. Instagram ran on Django for years at massive scale. Basecamp is still Rails. These teams focused on users, not on having the hottest stack.
What frameworks/tools have you found that stayed stable and just worked over the years?
what's the moderation policy/etiquette for calling out obviously LLM-generated comments? doing so feels like more heat than light, but letting them pass by without saying anything feels like another step towards a dead internet.
I think you've got to handle them on their own merits. Ultimately humans can write like AI and AI can write like humans, but a good answer is a good answer and a bad one is bad. Karma and moderation already have plenty of ways to handle inappropriate answers.
Our experience entirely. We replaced next.js with a simple router and everything in every sense got simpler, and FASTER. It was a remarkable education, replacing that crazy thing.
yeah RSC is totally unnecessary it turns out
It was pretty clear from the beginning it wasn't necessary. It's funny how many junior developers will rant about how you must avoid shipping unnecessary code to the client at all costs or you will die. Well, actually, I've been building React apps for over 10 years without any of this RSC shit and those apps made many millions of dollars, so it's actually not a problem.
We ship a multi megabyte package to our customers and preload massive amounts of data.
Nobody complains about it. In fact, they rave about how fast it is. They don’t care that the first page load is slow. Heck, they’re probably checked out between tasks anyways. But, once they’re in there, they want it to be fast.
It's a good idea in theory, the perf just needs to be better. Maybe with bun.
Bun unfortunately won't be production ready for any serious application for years. Too many security problems.
Really? Do you have links to any good analysis on this?
I'd be shocked, given that the bun team has shown a ton of maturity in all their messaging as far as API compatibility, engineering chops, and attention to detail. Nothing I've seen suggests that they'd be sloppy on the security side.
The issue list is full of bugs with segfaults. At least it was when I last checked. But that is what you get with C/C++/Zig et al. It takes a lot of time to get a fuzzing and testing process good enough to eliminate all that. In Chrome, for example, you could get a $20,000 bounty just for demonstrating a memory issue, without an actual exploit.
"1 more step function in performance bro, V8 was cool but just 1 more and we'll have enough to make CRUD apps in JS, bro I promise"
Or you can use React Query/Tanstack Query, not waste cycles and bandwidth on RSC, get an app with better UX (http://ilovessr.com), and a simpler mental model that's easier to maintain.
Yeah, Vite+React+TanStack SPA apps are definitely the way to go for the majority of web apps. I would still stick with Next.js for ecommerce or pages that need to load instantly when clicked from Google, however.
why would you go through the trouble of doing SSR on a user profile form?
it's not needed for SEO, caching it is pointless, the server won't render it faster than the client so you are not speeding up anything.
what's the point of complicating anything about it? seems that at least some of their suffering is a self-inflicted wound, maybe the author just picked a bad example.
It feels like so much work has been done to just end up going full circle back to Django-style website applications. All of these frameworks have continually re-solved problems that were already solved in something other than Javascript, and then people write blogs about how they're surprised about it. It feels a bit uncanny to see.
I am happy to have stayed mostly in Java and .NET land these last two decades, although I have had to put up with Next.js for the last three years as well.
These days I get a lot of déjà vu from ASP.NET Web Forms of 20 years ago.
Promo packets are a hell of a drug
The horror of needing to replace a routing layer. Why is this not a solved problem?
This is an undervalued advantage of using steady frameworks like Rails that in essence is the same as 20 years ago, but with lots of extras. I don’t remember any big changes in the routes at least. Nor in any of the other basic building blocks.
You could come back to rails after a 10 year break and pick up pretty quickly where you left off
I had to fully detach routing and lazy loading because it was very slow and uncomfortable. Next.js is well suited for a half-static website that you need tomorrow. But once you get to something complex it starts showing so many issues.
For a project at work we chose Next because the team knew React and other people in the company know React. It is quickly becoming the worst architectural decision of my whole career. I wish we would've picked anything else...
I noticed the author mentioned the stack they are currently using is NextJS + Hono, and according to the code in the blog, Hono is used as the API backend to provide data for NextJS through calls like `fetchUserInfo`. This is where I really got confused: why is it designed like this? NextJS is already a full-stack framework, especially since they use RSC; you can get user data directly from the database in NextJS. If you decide to use Hono for the HTTP API, which is fine because it's more lightweight and adaptable, then why RSC rather than making the frontend a SPA? Even if you used NextJS to write a SPA, it would look much more reasonable to have both NextJS and Hono used together.
It's crazy that NextJS became the default new developer tooling.
It's a bit unfortunate that whoever can scream the loudest with their marketing to get people to git clone their framework tends to grow the fastest.
There are so many layers of abstraction that are simply not necessary in most frameworks and that risk over-complicating or under-complicating many products.
It's almost like blindly defaulting towards any opinionated solution is not what we should be teaching people to do... /Sarcasm
>This is because when Next.js loads the actual Server Component, no matter what, the entire page re-mounts. I was begging Next.js to just update the existing DOM and preserve state, but it just doesn't.
YES! YES! I FEEL SO SEEN RIGHT NOW! I find this behavior unbelievably frustrating. It's hard for me to understand why they ever even shipped RSCs without fixing this.
Ooeeff... I've been thinking about switching from the Pages Router to this, but this kinda defeats the purpose.
Huh, mind clarifying a bit? I don't think I have experienced this, but maybe I'm missing something.
The issue is everyone's optimizing for blog post metrics, not actual problems. "Look at this new pattern!" gets clicks. "We kept it simple and it just works" doesn't. Same thing happened with microservices - everyone rushed in because it sounded cool, then spent years dealing with distributed systems hell.
I use the Pages Router. Doing it this way makes sense to me in a way that SPAs never did.
TanStack's lack of back-button handling is infuriating. Throwing away users' data is simply not acceptable.
Is Wouter better in this regard?
Elaborate? Back button handling, in what way? I've not used tanstack thus far
The biggest facepalm moment I had was when we switched Levels.fyi from gulp.js to next.js. Our pagespeed, hosting costs, etc all took a significant hit. We're experiencing the same issues as described in the post and weighing our options to transition as well. Avoid next.js / vercel at all costs.
What did you end up doing? Are you still on nextjs? Big fan of levels.fyi btw. Thanks for your work
Appreciate it! We're still on nextjs. Will def put a blogpost together as we optimize / move away. Thankfully, AI makes large-scale mostly repetitive migrations like these much simpler.
Where would you prefer to deploy now?
Anybody passing by please share too
Tanstack, last I checked, doesn't even support RSC.
So?
I'll try to review the article with comments to make this a more critical discussion instead of just hating on Next.js (I've been a Next.js developer for years now and am quite happy with it - but I do agree it requires some deeper understanding)
> React is now using the words "server" and "client" to refer to a very specific things, ignoring their existing definitions. This would be fine, except Client components can run on the backend too
There were hard discussions in the beginning about naming these things. Even calling them "backend" and "frontend" (as they suggest in the article) wasn't clear about their behavior semantics. I understand the naming annoyances, but it's a complex issue that requires a lot more thought than just "ah, we should've called it like this"
> …This results in awkwardly small server components that only do data fetching and then have a client component that contains a mostly-static version of the page.
> // HydrationBoundary is a client component that passes JSON
> // data from the React server to the client component.
> return <HydrationBoundary state={dehydrate(queryClient)}>
>   <ClientPage />
> </HydrationBoundary>;
It seems they're combining Next's native hydration mechanism with TanStack Query (another library) in order to more easily fetch in the browser?
To follow on their WebSocket example, where they need to update a user card's state when a WebSocket connection sends data: I don't see what the issue would be with just using a WebSocket library inside a client component. I imagine it's something you'd have to do in any other framework, so I don't understand what problem Next.js caused here.
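For what it's worth, a minimal sketch of that straightforward approach: a client component subscribing to a WebSocket in an effect (the endpoint and message shape are hypothetical):

    "use client";
    // A minimal sketch: a client component subscribing to a WebSocket and
    // updating its own state. Endpoint and message shape are hypothetical.
    import { useEffect, useState } from "react";

    export function UserStatus({ userId }: { userId: string }) {
      const [status, setStatus] = useState("offline");

      useEffect(() => {
        const ws = new WebSocket(`wss://example.com/users/${userId}/events`);
        ws.onmessage = (event) => {
          const msg = JSON.parse(event.data);
          if (msg.type === "status") setStatus(msg.status);
        };
        return () => ws.close(); // clean up on unmount / userId change
      }, [userId]);

      return <span>{status}</span>;
    }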
What they're doing screams hack and is probably the source of their issues in this section.
> Being logged in affects the homepage, which is infuriating because the client literally has everything needed to display the page instantly
I'm not sure I understand this part. They mention their app is not static but fully dynamic. Then how would they avoid showing a loading state between pages?
> One form of loading state that cannot be represented with the App Router is having a page such as a page like a git project's issue page, and clicking on a user name to navigate to their profile page. With loading.tsx, the entire page is a skeleton, but when modeling these queries with TanStack Query it is possible to show the username and avatar instantly while the user's bio and repositories are fetched in. Server components don't support this form of navigation because the data is only available in rendered components, so it must be re-fetched.
You can use third-party libs to achieve this idea of reusing information from one page to another. An example is Motion's AnimatePresence, which allows smooth transitions between two React states. Another possibility (for reusing data from an earlier page) is to integrate directly with Next.js's new View Transitions API: https://view-transition-example.vercel.app/blog <- notice how clicking on a post shows the title immediately
> At work, we just make our loading.tsx files contain the useQuery calls and show a skeleton. This is because when Next.js loads the actual Server Component, no matter what, the entire page re-mounts. No VDOM diffing here, meaning all hooks (useState) will reset slightly after the request completes. I tried to reproduce a simple case where I was begging Next.js to just update the existing DOM and preserve state, but it just doesn't. Thankfully, the time the blank RSC call takes is short enough.
This seems like an artefact of the first issue: trying to combine two different hydration systems that are not really meant to work together?
> Fetching layouts in isolation is a cute idea, but it ends up being silly because it also means that any data fetching has to be re-done per layout. You can't share a QueryClient; instead, you must rely on their monkey-patched fetch to cache the same GET request like they promise.
Perhaps the author is missing how React's cache works (https://react.dev/reference/react/cache) and how it can be used within Next.js to cache fetches _PER TREE RENDER_ to avoid this problem entirely
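For reference, a minimal sketch of that pattern, assuming the App Router and a hypothetical endpoint: wrap the lookup in cache() and call it from both the layout and the page; within one render pass the fetch only runs once.

    // A minimal sketch of React's cache() deduplicating a lookup within
    // one server render pass; the endpoint is hypothetical.
    import { cache } from "react";

    export const getUser = cache(async (id: string) => {
      const res = await fetch(`https://api.example.com/users/${id}`);
      return res.json() as Promise<{ id: string; name: string }>;
    });

    // layout.tsx and page.tsx can both await getUser(id); within a single
    // render the underlying fetch runs only once, so layouts don't re-fetch.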
> This solution doubles the size of the initial HTML payload. Except it's worse, because the RSC payload includes JSON quoted in JS string literals, which format is much less efficient than HTML. While it seems to compress fine with brotli and render fast in the browser, this is wasteful. With the hydration pattern, at least the data locally could be re-used for interactivity and other pages.
Yes, sending data twice is an architectural hurdle required for hydration to work. The idea of reusing that data in other pages was discussed above via things like AnimatePresence.
What's important to note here is that the RSC payload sits at the bottom of the HTML. Since HTML is streamed by default, this won't impact time to first render. Again, other frameworks need to do this as well (in other ways, but still, it needs to happen)
I totally understand the author's frustrations. Next.js isn't perfect, and I also have lots of issues with it. Namely, I dislike their intercept/parallel routes mechanism, and setting up ISR/PPR is a nightmare. I just felt the need to address some of their comments so maybe it can help them?
As a first step I would get rid of TanStack, since it's fighting against Next.js's architecture.
Or yeah just move entirely elsewhere :)
> but I do agree it requires some deeper understanding
You are agreeing with who?
"large disagreements about fundamental design decisions" is not a lack of understanding. NextJS is the problem.
...or just not use server components :)
I'm surprised so many people drank the RSC koolaid. I tried it for maybe an hour and it became painfully obvious very quickly how much harder it is to build something that used to be simple.
I just don't understand the use-case either.
Either you're building an SEO-optimized website and you want that initial page load to be as fast as possible. In this case, just build a static website. Use whatever technology you desire and compile to HTML+CSS.
Or you're building an "app", in which case you should expect users to linger around for a bit, and that fat initial payload will eventually be cached, so you really don't need to send it down on every click. So go full-on with client-side rendering and simplify your stack a little. You can still do a lot of optimizations like code-splitting and prefetching and this and that, but we don't need this weird mixed modality where some things work in one place but not the other.
Which is pretty much what the author says and I'm glad to see people start to realize this.
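As a small illustration of the code-splitting mentioned above, a minimal sketch using React.lazy and Suspense (the SettingsPage module is hypothetical):

    // The SettingsPage bundle is only downloaded when it is first rendered.
    import { lazy, Suspense } from "react";

    const SettingsPage = lazy(() => import("./SettingsPage"));

    export function Settings() {
      return (
        <Suspense fallback={<p>Loading…</p>}>
          <SettingsPage />
        </Suspense>
      );
    }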
> Or you're building an "app" [...] So go full-on with the client-side rendering
I wish companies would take this a step further still and just build a PWA. This gives you access to so many web APIs that can further simplify your stack.
I agree that it's bewildering to see how many companies reach for Nextjs for webapps that don't need SEO optimization but some of the more complex rendering strategies can still be useful for web apps as well. Even for PWAs
If you're building an SEO-optimized website you don't even need to build a static website. Just SSR it like normal (you don't even need streaming) and chuck a CDN-Cache-Control header on there. You'll get responses in tens of ms.
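A minimal sketch of that setup, assuming an Express server doing plain renderToString SSR of a hypothetical App component; the CDN-Cache-Control header lets the CDN keep the rendered HTML while browsers revalidate:

    import express from "express";
    import { createElement } from "react";
    import { renderToString } from "react-dom/server";
    import { App } from "./App"; // hypothetical root component

    const app = express();

    app.get("*", (req, res) => {
      const html = renderToString(createElement(App, { url: req.url }));
      // Browsers revalidate on every request; the CDN serves its copy for 5 minutes.
      res.set("Cache-Control", "public, max-age=0, must-revalidate");
      res.set("CDN-Cache-Control", "public, max-age=300");
      res.send(`<!doctype html><div id="root">${html}</div>`);
    });

    app.listen(3000);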
and watch your code be 10x easier to reason about.
... oh wait that's what the author ended up doing LOL
have you tried Nuxt.js
Nuxt is amazing, but it was recently bought by vercel, so we’re all anticipating slow enshittification of it to match next.js
Homeroll Vite plus Express and call it a day.
I've used the App Router and it is fairly nice once you understand how all of it works. But it's a bucketload of complexity nobody really needs. A normal React SPA using vite + wouter + react-query works brilliantly.