Are React Server Components Really Slower? 

Jack Herrington
185K subscribers
28K views

Published: 29 Sep 2024

Comments: 204
@StingSting844
@StingSting844 Год назад
I work on a B2B product which has a React SPA for the main portal. It works fine and I was looking at simplifying it for scale as the CRA webpack config is reaching its limits. It's already split up into microfrontends using Lerna. I tested out Next and Astro. The results were wildly different. Even though Astro is meant for sites, it was a great fit for us. As a pilot project we converted a small CRA app to Astro and people immediately appreciated how fast it loaded. Plus an important benefit of Astro is that we were able to use a complex Vue component directly in our page with very little change. We want to move towards something that embraces multiple options for frameworks rather than be stuck on React. Astro is insanely good for B2B apps. It's a shame they don't market it that way
@thiccboi6211
@thiccboi6211 Год назад
Oh nice! Why was the Vue component necessary instead of React? I'm sure there would have been plenty of options in React to do that.
@StingSting844
@StingSting844 Год назад
@@thiccboi6211 that's a good question. We are an acquired product in the current company and we are required to integrate a couple of products into ours. Other teams use Vue and Angular for their UI. We figured out that the complex custom component that we needed to integrate was already written by the Vue team with good types and test coverage. So we just integrated it into our product cause it was a trivial task in Astro. Also Vue was much easier to learn than React. Rewriting in React was pointless.
@thiccboi6211
@thiccboi6211 Год назад
@@StingSting844 Oh gotcha. I'm sure the management was thrilled 😂
@albertgao7256
@albertgao7256 Год назад
SSG your before-auth, and SPA your after-auth part with route splitting and concurrent React to prevent waterfall, then you are golden. Beat most SSR. For SPA bundling, just use Vite.
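For readers unfamiliar with the "SPA your after-auth part with route splitting" idea, here is a minimal sketch (not from the video), assuming a react-router-dom v6 SPA; the route module paths and components are hypothetical:

```tsx
import { lazy, Suspense } from "react";
import { BrowserRouter, Route, Routes } from "react-router-dom";

// Hypothetical route modules; each chunk is only downloaded when its route is hit.
const Dashboard = lazy(() => import("./routes/Dashboard"));
const Settings = lazy(() => import("./routes/Settings"));

export function AuthedApp() {
  return (
    <BrowserRouter>
      {/* One Suspense boundary so lazily loaded routes resolve concurrently */}
      <Suspense fallback={<p>Loading…</p>}>
        <Routes>
          <Route path="/dashboard" element={<Dashboard />} />
          <Route path="/settings" element={<Settings />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}
```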
@brianmcbride1631
@brianmcbride1631 Год назад
Astro is great. There is some work being done on an opt-in client side router. It looks like it will do a DOM diff and only replace the islands that are updated from the server, not the whole DOM. That is really, really interesting. Best of both worlds. Astro is one to watch
@AtilaDotIO
@AtilaDotIO Год назад
Damn it's a good video!! Great pace, well collected data, and very unbiased conclusions! Love it and already recommending it everywhere! Thanks for doing this, it was clearly lots of work to crunch all of it in 13 digestible minutes!
@SantiagoEsteva-z5g
@SantiagoEsteva-z5g Год назад
Jack, thank you for this meticulous comparison. It would be amazing to continue the comparison including Remix and adding WebPageTest or Lighthouse image strips to "measure" user speed perception
@BeyondLegendary
@BeyondLegendary Год назад
Impressive, very nice. Let's see Paul Allen's comparison.
@-maddhruv
@-maddhruv Год назад
Which?
@marshmallow8709
@marshmallow8709 Год назад
@@-maddhruv you don't watch Paul Allen? wow.
@greidinger-reis
@greidinger-reis Год назад
Lmao this dude don't watch Paul Allen's comparisons
@-maddhruv
@-maddhruv Год назад
It is not necessary that everyone is watching what you are watching. If it is something worth sharing, share it; otherwise shut your ***
@greidinger-reis
@greidinger-reis Год назад
@@-maddhruv ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-cISYzA36-ZY.html
@runonce
@runonce Год назад
Great content! Thank you so much. I believe that RSC offers some other benefits apart from streaming. For example, getServerSideProps can't be used outside of a page while RSC allows you to run whatever async task you need in any React component.
@jherr
@jherr Год назад
Well, only during server render. You can’t use those async components on the client. So it’s really just moving the async code required for initial page render from getServerSideProps into the RSCs.
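For context, a condensed sketch of the equivalence Jack describes: both versions do their async work only on the server during the initial render, and the RSC version just moves it into the component. getUser and both components are hypothetical, and the two "files" are merged into one snippet for illustration (a real Pages Router page would default-export its component).

```tsx
// Hypothetical data fetcher standing in for any backend call.
async function getUser(): Promise<{ name: string }> {
  return { name: "Ada" };
}

// Pages Router (pages/profile.tsx): the async work lives in getServerSideProps.
export async function getServerSideProps() {
  const user = await getUser();
  return { props: { user } };
}

export function ProfilePage({ user }: { user: { name: string } }) {
  return <h1>{user.name}</h1>;
}

// App Router (app/profile/page.tsx): the same async work moves into the
// component itself -- but, as noted above, only during the server render.
export async function ProfileRSC() {
  const user = await getUser();
  return <h1>{user.name}</h1>;
}
```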
@runonce
@runonce Год назад
@@jherr This allows you to have a global navbar coming from a CMS for example, while preserving the state during navigation. This is something I couldn't find a way to achieve using the old Pages routes (there are some workarounds but they seem too hacky for me).
@jherr
@jherr Год назад
@@runonce Huh, I've worked on several big NextJS apps and I've never seen issues with the navbar cycling. Are you using next/link to do the navigation? That should swap out just the content.
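A minimal sketch of the App Router layout pattern @runonce describes, assuming a hypothetical CMS endpoint and Nav component: the navbar data is fetched in a server-rendered root layout, which persists (along with any client state nested inside it) across next/link navigations.

```tsx
// app/layout.tsx (sketch)
import type { ReactNode } from "react";

type NavData = { items: string[] };

// Hypothetical CMS call; swap in the real endpoint.
async function fetchNavFromCMS(): Promise<NavData> {
  const res = await fetch("https://cms.example.com/nav", {
    next: { revalidate: 60 }, // refresh the nav at most once a minute
  });
  return res.json();
}

function Nav({ items }: { items: string[] }) {
  return (
    <nav>
      {items.map((item) => (
        <a key={item} href={`/${item}`}>
          {item}
        </a>
      ))}
    </nav>
  );
}

// The layout renders once and is not remounted on route changes.
export default async function RootLayout({ children }: { children: ReactNode }) {
  const nav = await fetchNavFromCMS();
  return (
    <html lang="en">
      <body>
        <Nav items={nav.items} />
        {children}
      </body>
    </html>
  );
}
```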
@codingjitsu
@codingjitsu Год назад
This was insightful, Thank you for making this video.
@hamedmatari2577
@hamedmatari2577 Год назад
You would not believe how happy I am when I get the notification for a new video. Thank you, Jack!
@-maddhruv
@-maddhruv Год назад
Very nice analysis ❤
@obamer1342
@obamer1342 Год назад
RSC + nextjs makes my code confusing as hell. In my app now I have to think about 3 different React components: RSCs, client components in SSR, and client components on the client. It's easy by itself, but when you try to use it with other libraries like @apollo/client I need to have my provider for RSCs and a provider for client components, and if you're working with cookies you now have to have different logic on the client and server, in addition to tuning all the optimizations Next adds, like the revalidation of individual routes. At the end my app's code feels and looks like Frankenstein. A mix of everything . . .
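One common way to tame the provider split described above is a single "use client" providers file rendered once from the root layout. A sketch only, with an illustrative Apollo setup (the /api/graphql URI is a placeholder):

```tsx
// app/providers.tsx (sketch)
"use client";

import { ApolloClient, ApolloProvider, InMemoryCache } from "@apollo/client";
import { useState, type ReactNode } from "react";

export function Providers({ children }: { children: ReactNode }) {
  // Create the client once per browser session; server components never import this file.
  const [client] = useState(
    () => new ApolloClient({ uri: "/api/graphql", cache: new InMemoryCache() })
  );
  return <ApolloProvider client={client}>{children}</ApolloProvider>;
}
```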
@antonychiramel80
@antonychiramel80 Год назад
I'm convinced to learn Rust😅
@VoxyDev
@VoxyDev Год назад
Have you ever worked with text editor libs like Slatejs, Draftjs, or Quilljs before? I wonder if you have the bandwidth to do a video comparing them.
@jherr
@jherr Год назад
I have. I wasn't particularly thinking about doing a comparison video. I personally tend to find the most popular one on the day. One that has the features I need, that can be reasonably styled and plays nicely with React.
@brucegenerator2755
@brucegenerator2755 Год назад
Oof. We are considering migrating our codebase to the app dir and now I'm wondering if we can afford the performance hit. The repo's fairly massive, CMS-driven, and serves dozens of websites for our client 🤦‍♂️
@jherr
@jherr Год назад
Definitely worth running the numbers on. That being said, how long will pages compatibility last? Tough call.
@JGBSolutions
@JGBSolutions Год назад
Yes it's slower. Do they have any plan to fix that? That was a bummer for me for a new version of an app I built.
@akosbalint3485
@akosbalint3485 Год назад
As always, thank you for the video and the topic. The response times are very disappointing. Vercel/Next/React should make more effort to speed up their framework. I am curious about measurements in the same scenarios with SolidStart.
@jherr
@jherr Год назад
That would be a fun comparison. I'll do it.
@akosbalint3485
@akosbalint3485 Год назад
@@jherr As I read it supports SSR and streaming SSR as well. So the old and new next app directory can be compared also.
@aravindm6124
@aravindm6124 Год назад
I feel bad that I couldn't understand these comments, which are in depth. I would really appreciate it if anyone could give some tips to get into the depth of these topics.
@Dev-Siri
@Dev-Siri Год назад
I think this is because next 13 does more work on the server, so the server is slower than the pages impl which does a lot of work on the client. And app router is very new while the pages router has had a lot more time to bake in the oven
@jherr
@jherr Год назад
I'm not sure I understand. In both cases; getServerSideProps and RSCs, all the work and rendering is done on the server. In both cases any "client" components are actually run on both the client and the server. The difference here is that after the application is rendered there is some more overhead time with the App Router that has nothing to do with the rendering of components or requests to services, since that has already all been done. But it is correlated to the size of the DOM. So whatever it is has something to do with the number of tags on the page.
@Dev-Siri
@Dev-Siri Год назад
@@jherr idk why youtube is removing my comments. I tried to comment like 2 times and they both got deleted. I am unable to reply to you here now so I guess I can't clarify my statement. Maybe this comment will also get deleted before you are able to see it.
@notted749
@notted749 Год назад
Hey Jack, in your CDN example you mentioned this won’t have any impact, but what effect does it have on hydration times? Are they the same or different in any way please? As a professional user of Nextjs I am increasingly finding they are not thinking of enterprise and cached pages with the direction they are taking react/Nextjs and it feels frustrating at times.
@jherr
@jherr Год назад
AFAIK hydration should be symmetric between CDN and non-CDN since all the hydration data is in the page payload, and the JS bundles are going to be both CDN deployed and browser cached. You'll get the page contents faster with a CDN, but the browser hydration will take just as long.
@notted749
@notted749 Год назад
@@jherr Hey Jack, sorry, I meant differences in hydration times between classic getServerSideProps and RSCs. Are RSCs doing more magic that causes hydration on the client to be slower than classic pages using getServerSideProps, which is just reading data from the returned HTML and passing it to React hydrate? So in the case of using a CDN to cache the SSR, with the TTFB being equal, are RSCs still slower?
@hatrer2244
@hatrer2244 Год назад
You are misunderstanding RSC. You can have a server component with a huge list of imports that do not need to be sent through the network. This can speed up initial load for the user.
@jherr
@jherr Год назад
So we get a smaller bundle with RSCs, ok. Not that I have seen that, but conceptually it tracks. How much smaller? If a bundle for a React app is 70Kb, how much of that is React? More than likely the vast majority of that bundle is the framework, which puts a lower bound on how much improvement you can get out of client side optimization. Certainly there is some optimization there. But the vast majority of the improvement in initial load is going to come from streaming with RSCs. That's the trick though, you have to use streaming. Your backend data sources have to be independent of each other so that you can block on critical data sources, and stream slower data sources. Then you can see big improvements. I'm not saying that RSCs are bad. I just want to dispel the idea that a port to RSCs will immediately lead to better performance, it won't. And initially it might actually be worse performance.
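A sketch of the streaming pattern Jack is pointing at: block the initial flush on the critical data and push the slow, independent source behind a Suspense boundary so it streams in after the shell. Both data functions are hypothetical stand-ins.

```tsx
// app/dashboard/page.tsx (sketch)
import { Suspense } from "react";

async function getCriticalData() {
  return { title: "Dashboard" };
}

async function getSlowRecommendations() {
  await new Promise((resolve) => setTimeout(resolve, 2000)); // simulate a slow backend
  return ["a", "b", "c"];
}

async function Recommendations() {
  const recs = await getSlowRecommendations();
  return (
    <ul>
      {recs.map((rec) => (
        <li key={rec}>{rec}</li>
      ))}
    </ul>
  );
}

export default async function Page() {
  const critical = await getCriticalData(); // blocks the initial flush
  return (
    <main>
      <h1>{critical.title}</h1>
      <Suspense fallback={<p>Loading recommendations…</p>}>
        <Recommendations /> {/* streamed in once the slow data resolves */}
      </Suspense>
    </main>
  );
}
```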
@hatrer2244
@hatrer2244 Год назад
@@jherr You are right, everyone should not just start porting their old pages to RSC and expect huge performance gains. Here are some examples of packages that could benefit from RSC in comparison to React:
- react@18.2.0 - bundle size 6.4 kB minified, 2.5 kB minified + gzipped
- @apollo/client@3.7.16 - bundle size 150.1 kB minified, 43.7 kB minified + gzipped
- contentful@10.3.1 - bundle size 75.6 kB minified, 23.9 kB minified + gzipped
- react-datocms@4.1.3 - bundle size 43.2 kB minified, 13.4 kB minified + gzipped
@vitorfigueiredomarques2004
@vitorfigueiredomarques2004 Год назад
10:45 Couldn't it be due to the fact that your pages are being cached in the CDN?
@jherr
@jherr Год назад
The CDN is consistently slower for RSCs though? CDN cache speed should be a constant across any architecture since the origin is only hit once (depending on cache settings) and then from there on out it's just the CDN speed.
@vitorfigueiredomarques2004
@vitorfigueiredomarques2004 Год назад
@@jherr As I said in another comment, this could be explained by the fact that nextJS app router adds the json representation of the virtual DOM at the end of the HTML file. And larger HTML needs more time to download even in a CDN.
@jherr
@jherr Год назад
@@vitorfigueiredomarques2004 Ok, yeah, actually. Sorry I didn't connect it with the CDN angle the first time you mentioned it. That's probably it.
@virendrapatel775
@virendrapatel775 Год назад
I found a memory leak with 13.4. Then we downgraded the version to 13 and all seems good in the production application.
@judegao7766
@judegao7766 Год назад
Is the CDN cache per user/per app? I feel like we can’t really use CDN per app because different users have different contents/languages.
@jherr
@jherr Год назад
Different languages can be handled by CDN but different users, not unless you get clever about it.
@trappar_og
@trappar_og Год назад
Seems to me that by trying to test apples to apples with identical features on both sides, you’re actually missing everything that the app router can optimize. The app router can cache API responses on a per-request basis where the pages router can’t. I also wonder if the slowdown server-side is offset on the client side since there’s less hydration to do and less JavaScript to load. I wonder if LCP is lower on an app router page with a significant number of tags if they are all rendered on the server since there is nothing to do once the client receives the page. Clearly the app router is going to hit servers harder though, and that sucks.
@jherr
@jherr Год назад
I have seen better TTI with App Router because the server worked harder to pre-compute the VDOM and the client benefits from that. It's an interesting tradeoff. Just thinking about this now, it seems like that tradeoff is betting on client performance to stall or decrease over time, when the reality seems to be the opposite, that mobile and desktop clients are improving their performance. I do wonder why the persisted VDOM needs to include output from the client components as well as the RSCs. Next/React clearly know that the component is a client component because the props are persisted as well. Why don't they just stop the tree traversal there and send that as the start of the VDOM to the client and then run the client components on the client to hydrate the rest of the DOM?
@trappar_og
@trappar_og Год назад
SPAs made so much sense to me for so many reasons. Why compute anything on the server? Just let the client do all the work. With code-splitting and edge caching it’s an extremely cheap model to distribute computation to the client that works plenty well enough. The problem is that it banked on clients/networking being fast enough that no one would care about the minor optimization issues, and on search engines figuring out how to index JS only pages in such a way that SEO would not be affected. Evidently we’re not living in the world where those hopes came to fruition, so the pendulum swung the other way. It’s strange just how far it swung though. The pages router already solved these problems, and I have a hard time believing that RSC was necessary. Maybe eventually the benefits of this new model will become more clear and the drawbacks will be ironed out. I do have a fairly common example that shows what the app router can do efficiently that the pages router can’t though - A page which has both content coming from a headless CMS and also requires cookies to determine what gets rendered server-side (feature flagging/experiments for example). In such a case, the page must be fully dynamic due to the server-side cookie usage, but there is no need to repeatedly fetch the CMS content. The app router automatically optimizes this case very well where the pages router doesn’t. It would be very interesting to see benchmarks on a case like that!
@jherr
@jherr Год назад
@@trappar_og I am so with you. I want to know more about what you are thinking on this CMS example because I really would like to show App Router just kicking ass. So is the idea that we load the auth info first, build that layout (i.e. header/nav) and then suspense/stream the CMS content. And it's better because we aren't blocking on both before rendering?
@trappar_og
@trappar_og Год назад
@@jherr no, that’s not what I’m talking about, but maybe there are potential areas for optimizing with app router that involve streaming too. What I was talking about is much simpler. Simply put, there are probably lots of cases where the app router’s more granular caching will make for faster apps than the overly centralized and inflexible caching offered by the pages router. For example, let’s take away the whole CMS concept just to keep it simple and say we have a page that requires us to load some data from an API and that API takes 250ms to load a response. With the pages router, let’s say we’re making the API query inside a getStaticProps function that returns `revalidate` to keep the data relatively fresh. So have a basic ISR page and when a user requests the page it can be served directly from the cache. As a result the 250ms that the api takes has no impact on the time it takes for our server to respond to a client. The 250ms would only ever really be observed during the initial build. I believe this works very similarly with the app router using segment level caching (I haven’t actually built something on this just yet). Or we could also just cache the api response itself - or even both. In any case we again shouldn’t expect the 250ms api request to come into play for users. But what happens to these pages when there is some piece of data needed to customize the page for a specific user? This could be due to auth, an experiment which depends on a flag set in a cookie, or any other situation where the response depends on data from the incoming request. For the pages router, we have to switch from getStaticProps to getServerSideProps so we can access data from the request, but in doing so we lose the ISR caching that existed before. Now on every request it has to go fetch our API data despite that this data isn’t necessarily expected to change, and each user’s request now takes a minimum of 250ms longer to complete. With the app router on the other hand, the API result cache is totally independent from anything else. Even if the page must now be dynamic and we have to throw away the segment level caching, we can still keep using the same cached copy of the API result and that 250ms will not come into play for a user. So I’d expect that based on your findings in this video the pages router would perform better when it’s simply two static pages, but the app router will win as soon as the pages have to be dynamic. That’s just my theory though. At this point I’ve mostly just read the docs and I’m still a couple weeks away from starting to test some of this out for the company I work for! I could be totally wrong! 😅
@jherr
@jherr Год назад
@@trappar_og gotcha. Yeah, I’d have to run the numbers to see how impactful that is but that’s one of the more direct app router scenarios that I’ve heard.
@shayanalijalbani9894
@shayanalijalbani9894 Год назад
Yes
@jikaikas
@jikaikas Год назад
wow then why do we use nextjs
@MrPlaiedes
@MrPlaiedes Год назад
I'd be disappointed if Next deprecated pages.
@MatthewDeaners
@MatthewDeaners Год назад
I don't understand this comparison. Yeah, obviously rendering the tag on the server is slower, but isn't the point that the TTI (time to interactive) should be faster? RSCs are dumb, but I immediately felt like you didn't understand what you were testing.
@MatthewDeaners
@MatthewDeaners Год назад
I got to the end of the video and was just confused how the whole video misses the point. What matters is perceived speed by a user, not requests per second of the server.
@jherr
@jherr Год назад
That's certainly a factor, and yes, I have seen some TTI benefits with App Router because of the VDOM starter that is frozen at the end of the page and used for faster hydration (but not zero time hydration). Whether or not that outweighs the page render time and bandwidth cost of the bigger page, it's not clear.
@lucaszapico926
@lucaszapico926 Год назад
Hey Jack, I really appreciate your comparison. I just started migrating some Next 12 apps to Next 13 and some of the transitions are hard for me to see the value of, to the point that I'm strongly considering jumping to Remix for future projects. As of right now I feel a little jerked around by Next, and the fact that the Next 12 docs just disappeared bothers me. Your analyses are a huge value to me and I really appreciate that you take the time to make these. 🙏
@sushantrajbanshi4508
@sushantrajbanshi4508 Год назад
With the introduction of *Signals* & *Standalone components* in v16, and the current state of React/Next 13, I'm even considering moving to Angular at this point.
@naregtokatlian2869
@naregtokatlian2869 Год назад
The Next 12 docs are still there, check the dropdown on the left. It took me a while too.
@codekaze
@codekaze Год назад
I switched to SvelteKit a few weeks ago because of all the shenanigans going on with Next 13 and React. And holy shit, the difference in performance and DX is HUGE. No virtual DOM. Variables are reactive by default. Extremely small bundle size. Transition and animation support. Small ecosystem? You can use ANY JavaScript library out there instead of looking for the ones that only start with "react-".
@dave6012
@dave6012 Год назад
This is an important investigation. A nice way to compare UX could be to perform the same stress test in the browser and record web vitals. We might see the virtues of the app router more clearly.
@Cahnisama
@Cahnisama Год назад
RSC is really puzzling everyone
@netssrmrz
@netssrmrz Год назад
I've been doing web dev for over 20 years and from my point of view SSR was done and dumped. I'd love to see someone do a piece on why this time around it's better. Pros and Cons. Not the usual React/NextJS Kool-Aid.
@mr.g937
@mr.g937 Год назад
The alternative being what? Standard CSR?
@ikbo
@ikbo Год назад
If you have 20 years of dev experience and you still don't understand why ssr is fast you should change careers?
@albertgao7256
@albertgao7256 Год назад
@@ikbo it always "depends", as CSR could simply beat SSR to death for things like FCP and TTI. SSR has its merits, but it is never a silver bullet. Where did you learn all the wrong information?
@brianmcbride1631
@brianmcbride1631 Год назад
@@ikbo It is context. Poor internet connections, SSR suuuucks if you have to fetch the full dom on every route. If you are pulling data from APIs, then it makes it hard to preload pages, since that data could be extremely stale then. Having an state engine that calls APIs with as little overfetching as possible results in the fastest dynamic data updates. Far faster than SSR, once your site is running. Of course, first load is the slow part. But, it is context. A news page, blog site, marketing site. SSR, sure. If you have a site or an app where you expect users to stick around and follow many routes, SSG + CSR and APIs for dynamic data will be way more performant. And, of course, if you want your web app to work well in poor internet connections, then SSR just isn't a great solution as it is currently implemented in NextJS
@netssrmrz
@netssrmrz Год назад
@@mr.g937 You've obviously made up your mind that SSR solves everything so all I can say is, good for you.
@ИгорьБаданюк
@ИгорьБаданюк Год назад
Thanks Jack, that's a great comparison. I've been studying and gradually migrating the application for over half a year now, some things are good, Metadata API, Routes and overall structure but some are disappointing. For example the server components are cool, but they don't work with CSP because they are inline scripts and there is no solution for nonce or script-hash. I also noticed a degradation in the speed of dynamic component imports compared to Pages Router. If you use dynamic import for a component that renders more than 50 times you get a Maximum update depth error for no reason.
@adimardev1550
@adimardev1550 Год назад
Performance is a huge deal. I also noticed that in my own work nextjs 13 slows me down by up to 70% during development. I'm not even implementing data fetching; I was just writing JSX and it was slow. I thought it was my computer, but now you made it crystal clear that it's really nextjs 13. Not only that, nextjs 13 breaks a lot of standards, as if it's no longer agnostic. I found it painful integrating with libraries, especially when dealing with state and optimizations. And the nextjs 12 docs are just gone forever.
@vitorfigueiredomarques2004
@vitorfigueiredomarques2004 Год назад
I could be mistaken, but one thing I noticed is that the NextJS 13 app router adds the virtual DOM JSON representation at the end of the page as a script tag. So it could be that this difference is due to the fact that the app router creates larger HTML, which would take more time to download. Which could mean that the problem is not exactly with the server component creation itself but with how it is served. Of course this means that the pages directory can have a lower First Contentful Paint (FCP) because it serves the HTML faster, but I think it's not a complete performance analysis. It could be that server components trade FCP for a lower TTI (time to interactive); remember that one of the advantages of server components is that browsers will parse and execute less JavaScript to create the virtual DOM. So I think that you should measure the time between the request and the moment the web browser has finished running all the essential JavaScript and finished creating the virtual DOM (you could use some headless browser to do that).
@electrolyteorb
@electrolyteorb Год назад
This is actually an eye opener for me 🙏🙏 I was so obsessed with the app router and RSC that I get a heart attack every time I need to put "use client" in a file 😂😂😂 (I know it differs from both RSC and getServerSideProps()...)
@leularia
@leularia Год назад
Fr its slow slow af
@iansmith3301
@iansmith3301 2 месяца назад
The flat lining performance graphs in Vercel to me indicate that the underlying hardware that is running your code is being throttled, e.g. you're hitting an IO/CPU credit limit so the threads for the node process are allocated X amount of performance; or it could be a throttle from their firewalls
@ryanshaul4942
@ryanshaul4942 Год назад
Love the vid Jack! Do you have any plans to check the lighthouse scores / client side JS payload sizes etc of the 2 models? While this is great to know, I think that matters a bit more than the raw requests/sec metric.
@jherr
@jherr Год назад
Lighthouse scores on anything non-trivial are going to be 100%. And the JS payload sizes aren't going to be much different since the bulk of the payload is in the libraries and not in the user code on top of it. And even then, in the case of RSCs, it's only the RSCs which will be removed from the bundle; again, not a huge savings unless the RSC implementations are huge. Actually, the one big difference in metrics is going to be the size of the returned payload. Routes that use getServerSideProps only send along the data from getServerSideProps as the hydration payload, where in the RSC model they freeze and send the entire DOM as JSON along with the HTML, which can be significant in size. This has the advantage of reducing the hydration startup time on the client, but at the price of more bandwidth costs. This is going to sound like I'm uniformly against the App Router, I'm really not. I just want to make sure that folks understand that the DX advantages don't come without costs.
@huzaifac137
@huzaifac137 Год назад
Server components take 3x more time for me to fetch data compared to the traditional React approach of fetching data from a node server
@Erro14
@Erro14 Год назад
Really appreciate this comparison. I was very hyped about RSC & app router, but im getting more & more sceptical about it. It would be nice to also see the comparison of streaming SSR vs partial SSR for main layout & fetching the data on client side vs full SSR.
@DanteMishima
@DanteMishima Год назад
I was skeptical from the onset, and after testing it my suspicions were confirmed. I'm sticking to the pages method - it's not broke, it needs no fixing
@glekner
@glekner Год назад
Thanks! Very informative. I wonder whats the optimal balance between DX and performance
@jherr
@jherr Год назад
Thank you so much for the support!
@eleah2665
@eleah2665 Год назад
First in!
@lukasmolcic5143
@lukasmolcic5143 Год назад
It seems like there could be two reasons for this. The first is that this is a new paradigm and there is still some room for them to do optimizations behind the scenes to catch up with the old model; the other option is that the new model, just by its design, needs to do extra stuff and will always be slower. It would be nice if we could actually get someone from Vercel to address this. On a semi-related point, I am still confused about the streaming part. If I need a page for SEO, can I stream it with loaders for the initial load? That doesn't sound right. If I don't need SEO then rendering on the client seems to make much more sense, so what would be a use case for server-side streamed rendered HTML?
@Dontcaredidntask-q9m
@Dontcaredidntask-q9m Год назад
new paradigm?? this is how the web worked 20 years ago
@jherr
@jherr Год назад
The streaming stuff is relatively new. We’ve had keep alive connections for a while but I don’t recall any frameworks having built in streaming a la NextJS/Remix. There were frameworks that would stream the head and then stream the body in chunks. Which improved TTFB.
@regibyte
@regibyte Год назад
Hey Jack I know your focus is on react but since everyone is comparing RSC with PHP, could you make a video comparing something like laravel-livewire or inertia with NextJS? It would be really interesting. Awesome video btw
@jherr
@jherr Год назад
I've been looking to do a hotwire video for a while. I was thinking about covering it in Rails, since I think that's where it comes from originally. But definitely there is a lot of buzz around PHP again, now.
@PeerReynders
@PeerReynders Год назад
@@jherr
- 2018-09-07: Phoenix LiveView (ElixirConf 2018 keynote)
- 2019-01-15: Laravel Livewire (Caleb Porzio tw.)
- 2020-06-24: Rails Hotwire (DHH tw. referring to it as NEW MAGIC)
In fact in the Elixir community there was an earlier ("remote control" UI) framework called Drab. Drab's author stated that he was inspired by nagare (Python) and Volt (Ruby).
@jherr
@jherr Год назад
@@PeerReynders well ok then. I am wrong. Thanks for the info.
@PeerReynders
@PeerReynders Год назад
@@jherr Sorry, this wasn't about right or wrong. As tends to be the case, I think Rails Hotwire just had the most popular exposure, while each community tends to put forward "their version" as the groundbreaking one.
@jherr
@jherr Год назад
@@PeerReynders yeah, true enough. I should have said that the first I heard of it was with rails. That would have been more accurate.
@rajaark5643
@rajaark5643 Год назад
Great effort, big applause to you.
@Zaber123
@Zaber123 Год назад
If you need to rely on streaming and showing skeletons at your suspense boundaries, haven’t you lost the biggest SSR advantage of reduced cumulative layout shift?
@jherr
@jherr Год назад
That's certainly an advantage. Another advantage is requests to microservices off the server as opposed to the client. But you get that in either scenario here.
@rand0mtv660
@rand0mtv660 Год назад
Nice comparison. Even though app router is slower, I still cannot imagine what kind of an app would have to serve 300 page requests per second because that's a lot and I expect at that point you have a lot of infrastructure running your app anyway. Of course, if app router could serve 1000 requests per second it would mean you require less horsepower which means lower bills of course. If only we could use Rust for server side rendering React instead of Nodejs, I think this would be crazy fast.
@jherr
@jherr Год назад
There is bun.
@damianjanus1990
@damianjanus1990 Год назад
Did you test the Fresh framework from Deno the same way?
@DanteMishima
@DanteMishima Год назад
I noticed this when I was testing a few days ago
@Thorax232
@Thorax232 Год назад
When you work with a real app, this makes it much easier to work with ISR. So I have two apps that are much faster. However, everything else doesn't seem to work right. revalidateTag() and revalidatePath() don't seem to do anything at all. They don't cache bust, which means if you want to do static pages that are rebuilt via user interaction, you're out of luck. Which means you're stuck with plain SSR without hydration, which we all know is slower. It's not that server components are somehow magically "just slower", it's that a lot of the caching features look to be undone and/or broken. In deployment I have one API route showing as "ISR" under summary and I have no idea why. I couldn't get anything else to be "ISR".
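For reference, a sketch of how the tag-based revalidation is meant to work (whether it behaves in a given Next 13.x release is exactly what the comment above disputes). The endpoint and tag name are hypothetical, and the inline "use server" action assumes server actions are enabled for that Next version.

```tsx
// app/posts/page.tsx (sketch)
import { revalidateTag } from "next/cache";

type Post = { id: string; title: string };

async function getPosts(): Promise<Post[]> {
  const res = await fetch("https://api.example.com/posts", {
    next: { tags: ["posts"] }, // tag the cached data
  });
  return res.json();
}

export default async function Posts() {
  async function createPost(formData: FormData) {
    "use server"; // inline server action
    await fetch("https://api.example.com/posts", {
      method: "POST",
      body: formData,
    });
    revalidateTag("posts"); // bust everything tagged "posts" for the next render
  }

  const posts = await getPosts();
  return (
    <main>
      <form action={createPost}>
        <input name="title" />
        <button type="submit">Add post</button>
      </form>
      <ul>
        {posts.map((post) => (
          <li key={post.id}>{post.title}</li>
        ))}
      </ul>
    </main>
  );
}
```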
@VoxyDev
@VoxyDev Год назад
Not speaking of getStaticProps with revalidation. It's much faster than App Router in many ways.
@shivanshubisht
@shivanshubisht Год назад
10:38 You are getting a flat line on the deployed version because Vercel/Netlify uses AWS Lambda (serverless functions), so they can scale infinitely because load can be distributed among different lambdas and new lambdas can be created. Also, your local machine is on a decline because it is running on a single instance (like EC2), therefore it can only handle a limited amount of traffic and will throttle since it can't scale like serverless functions.
@jherr
@jherr Год назад
ok, but why are they capped at different numbers? or capped at all?
@shivanshubisht
@shivanshubisht Год назад
@@jherr all lambdas get capped at a fixed memory size (the default for the Vercel free tier is 1024 MB with a timeout of 10 seconds). However, if you are deploying directly to AWS using Terraform/SST you can customize the timeout (max 900 seconds) and memory size to whatever you want.
@jherr
@jherr Год назад
@@shivanshubisht Yeah, we tried deploying NextJS directly to AWS lambdas using serverless and the cold start time was absolutely terrible. Have you found a good solution for that?
@shivanshubisht
@shivanshubisht Год назад
@@jherr Also, you had cold starts as I guess you weren't using a CDN for caching (like AWS CloudFront) with the lambdas. SST handles all of it for you using CloudFormation.
@matej-world
@matej-world Год назад
If web user is ‘far away’ from server, then api calls from user’s location will take significantly more time, because of latency. It would be significantly faster to perform all api calls all server side (low latency) and then deliver (stream) everything to client in one go. That would be significant if there are multiple api calls on a page.
@jherr
@jherr Год назад
That's the same in either case though. getServerSideProps is run on the server. RSCs requests are run on the server.
@matej-world
@matej-world Год назад
@@jherr I had a case where I had to run a potential auth refresh on each route and ended up adding logic to _app's getInitialProps, which doesn't always run on the server. I wanted to avoid adding logic to each page's getServerSideProps... at least for this case the app router would make sense... haven't implemented it yet though :)
@Crevulus
@Crevulus Год назад
Video request 🙋🏻 How to decouple your logic layer from your UI layer React components. I feel like something that will help mediors get to senior level is understanding these kinds of architectural decisions. They come up more frequently than which technologies to use or which new features of a technology to use, in my experience. It will also help us to understand the architectural decisions that have been made before we join a company, i.e. when we inherit a codebase. Most new feature/new tech tutorials on YouTube just introduce the concepts of the new tech/feature in the most basic way, e.g. throwing all the logic straight into the component. This isn't truly reflective of the experience we actually get working on a production codebase.
@MegaJehanzaib
@MegaJehanzaib Год назад
True, as a Jr. React dev, it becomes really convoluted separating logic from UI in React. Most of it is due to JSX I think. Angular at least doesn't suffer from this problem since it cleanly separates styles, HTML and JS business logic.
@jherr
@jherr Год назад
How much do you like monorepos?
@Crevulus
@Crevulus Год назад
@@jherr so far I'm still unaware of its advantages vs micro FEs (or whatever other alternative). I've seen microservices for BE, but not really for FE. I have seen most companies go with monorepos on the FE by default, and everyone gets hyped about alternatives for the FE but then never really employs them. Now, after that Amazon article about switching back to monorepos, people seem to be hyped about them again 🤷
@jherr
@jherr Год назад
@@Crevulus this isn't really microservices, it's just using a monorepo to easily refactor your application logic into external packages that can be independently tested and used by multiple applications.
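A minimal sketch of that separation: the stateful logic lives in a plain hook (which in a monorepo could sit in a shared package, e.g. "@acme/cart-logic", a hypothetical name) so it can be tested on its own, while the component only renders.

```tsx
import { useCallback, useState } from "react";

// Logic layer: a plain hook with no JSX; unit testable without rendering anything.
export function useCart() {
  const [items, setItems] = useState<string[]>([]);
  const add = useCallback((sku: string) => setItems((prev) => [...prev, sku]), []);
  const clear = useCallback(() => setItems([]), []);
  return { items, add, clear };
}

// UI layer: markup and event wiring only.
export function CartButton({ sku }: { sku: string }) {
  const { items, add } = useCart();
  return <button onClick={() => add(sku)}>Add to cart ({items.length})</button>;
}
```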
@Crevulus
@Crevulus Год назад
@@jherr This is the kind of stuff I (and a lot of us at mid level) need to learn 🤩
@aralroca
@aralroca Год назад
"RSC" is a paradigm shift that appears to be the solution for reducing the amount of JavaScript required on a web page. However, even if you only use a single client component, using RSC will result in more JavaScript being loaded than if you were to use Preact instead 😅
@jherr
@jherr Год назад
I'm not sure there is any optimization if you don't have any "use client" components. I'm pretty sure there is still a React bundle sent to the page and hydration from the JSON frozen DOM is still being done, even if there are no client components. IMHO react-based framework are never going to beat preact on bundle size.
@oscarljimenez5717
@oscarljimenez5717 Год назад
RSC is a new paradigm shift, but the focus is not reducing the amount of JS in the React bundle size (~50kb). Is about waterfalls and composition.
@Zaber123
@Zaber123 Год назад
@@oscarljimenez5717and this is why I remain unconvinced. You can get very similar composition and waterfall prevention with the loaders and nested routing that react-router ported over from Remix. Without a lot of the headaches, shifting landscape, and vendor lock in of RSC.
@edwardalmanzar8382
@edwardalmanzar8382 Год назад
👍
@mukulr5171
@mukulr5171 Год назад
🔥🔥
@elgalas
@elgalas Год назад
Wonder if garbage collection is not affecting the serverless deployment? Rather as the request is sent, the environment is shutdown, whereas locally your server is persistent and the GC kicks in? Or there's less memory as you increase requests?
@AcidicSolvent
@AcidicSolvent Год назад
What theme and terminal do you use @jack.
@CaleMcCollough
@CaleMcCollough Год назад
I thought Server Side Rendering was more for SEO.
@amit10878
@amit10878 Год назад
Thank you so much for this video! (Using next for 2 years now and learned a lot from your videos) I hope Next will do some changes that will speed up the performance, the router is so slow its painful.
@riddixdan5572
@riddixdan5572 Год назад
I wonder, how does remix compare to nextjs server components?
@haack79
@haack79 Год назад
Great video, thank you! And yes, I'm always concerned about performance and costs. I have been switching my project to Next 13 and found I really enjoy the new file structure; they really are trying to prioritize developer experience, in my opinion, which is a great thing, but I would love to see what the cost analysis would be, if significant at all. My issue with some deployment systems such as Amplify and NextJS is the bad caching and certain requests not being recognized as already sent, so you get charged for each one as if it were a new request.
@jherr
@jherr Год назад
Have you tried cloudfront?
@haack79
@haack79 Год назад
@@jherr not yet ! that's definitely something to look into and think about. thanks.
@haack79
@haack79 Год назад
@@jherr Also, have you tried Million.js yet?
@jherr
@jherr Год назад
@@haack79 Early on, when the docs were not great. I'm going to take another look soon. The performance gains look good, but there is instability as well. So, I'm not sure I'd use it in production yet.
@haack79
@haack79 Год назад
@@jherr I saw the video you made on it! So awesome! Your material is so clear and accurate, thanks so much =)
@subnatiby
@subnatiby Год назад
Me too
@gordonfreimann
@gordonfreimann Год назад
I believe that react never cared about performance. They only improve react with only the developer experience in mind same for nextjs. I come from a systems engineering background and i inherently care about performance for any kind of software and seeing these things makes me sad
@StingSting844
@StingSting844 Год назад
While true these frameworks are immensely popular for newer products where time to market is paramount
@gordonfreimann
@gordonfreimann Год назад
@@StingSting844 it can still be good on both :) good software shouldn’t mean it takes more time to market.
@pxkqd
@pxkqd Год назад
If you wanted performance you'd use Solidjs instead of React. If you're interested in developer experience though... same, better with Solidjs. So no reason to use React.
@oscarljimenez5717
@oscarljimenez5717 Год назад
The problem is that, like Jack said, in real applications behind a CDN or a cache database, all of these tests don't matter.
@IvanRandomDude
@IvanRandomDude Год назад
It is sad that "developer experience" became a code word for slow and inefficient code in FE space.
@JLarky
@JLarky Год назад
Nice video to celebrate 10 years of React :-)
@raspaccio
@raspaccio Год назад
I have no insight on this whatsoever, but I think you were being throttled by Vercel, that's why you see a limit on your requests per second for the deployed version.
@jherr
@jherr Год назад
That was one of the thoughts I had, but the steady state difference in pages vs app router still puzzles me. If it were throttling it seems like they would be the same.
@xrr-1
@xrr-1 Год назад
Jack you should make frontend system design videos, you're perfect for that
@jherr
@jherr Год назад
I'm looking at doing some content on Figma Dev and also around design systems in the NextJS RSC era.
@sairaj5660
@sairaj5660 Год назад
Also, what will the bandwidth costs be like when using RSC?
@jherr
@jherr Год назад
Yes. The returned payloads are larger because of a larger JSON payload.
@MonisKhanIM
@MonisKhanIM Год назад
Excellent work
@deatho0ne587
@deatho0ne587 Год назад
The reason for the flat lining you are seeing on Vercel could be DNS caching. My rough guess is that if you tested it once every 20 mins for three hours or so, you would see the time drop from the first run to the second, then it would flatline again till the last, +/- some time due to connections.
@jherr
@jherr Год назад
Hmmm... DNS lookup locally? Should only be once. And this was a test with a lot of requests, and I think that DNS value would be resolved on the first and then reused for the testing session.
@deatho0ne587
@deatho0ne587 Год назад
I thought you put the JSON on a Vercel (AWS) server, so not so local. Then getting the data the first time would be a miss, then multiple hits back to back. If that is the test you did then it sort of makes sense, since the only one that might matter is the first call +/- some network time. If that was the case, maybe try:
for (someTime) {
  // call 100, wait for the response
  // call 200, wait for the response
  // call 300, wait for the response
  // ...
  // call 1500, wait for the response
}
@adambickford8720
@adambickford8720 Год назад
The 'win' here is I can have a junior JS dev do 'full stack' work. If server performance actually mattered, shouldn't you be looking at something like Java anyway?
@jherr
@jherr Год назад
A junior JS dev can’t handle getServerSideProps?
@adambickford8720
@adambickford8720 Год назад
@@jherr I'm saying the entire paradigm isn't about performance in the first place, it's about DX and simplicity. Does this additional performance hit really matter?
@jherr
@jherr Год назад
@@adambickford8720 That really depends on what you are building. If you are building a more static CMS experience, frontended by a CDN or statically generated, then probably not. But if you are building a high volume, highly user customized experience that is also SSR'ed (either completely or partially) where every request is going back to origin then, yeah, performance is a going to be an issue.
@IvanRandomDude
@IvanRandomDude Год назад
@@adambickford8720 It should. We need to be environmentally friendly. Having the mentality that it does not matter leads to millions of inefficient apps in the cloud that waste resources and contribute to climate change and pollution.
@benarkimosh
@benarkimosh Год назад
Is it possible to use the pages and app router at the same time? How can you use RSCs without the app router?
@jherr
@jherr Год назад
You can use pages and app router at the same time, BUT your RSCs are restricted to App Router routes.
@benarkimosh
@benarkimosh Год назад
@@jherr I would love to see an example of this. Please.
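A rough sketch of how the two routers can coexist in one project, as long as the route paths don't collide; the file names and endpoint are made up.

```tsx
// Hybrid project layout (illustrative):
//
//   my-app/
//   ├─ pages/
//   │  └─ legacy.tsx          // classic page; getServerSideProps still works
//   └─ app/
//      └─ new-feature/
//         └─ page.tsx         // React Server Component, App Router only
//
// app/new-feature/page.tsx -- a minimal RSC living alongside the pages/ tree.
export default async function NewFeature() {
  const data = await fetch("https://api.example.com/feature", {
    cache: "no-store",
  }).then((res) => res.json());
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}
```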
@KevinPeters
@KevinPeters Год назад
It will not matter as much in the applications I am building so right now I am just happy with the DX improvements that we get (specifically calling fully typed server actions in client components without an intermediate step). Another performance gain might be when considering partially shared layouts between routes. In the pages paradigm that’s very difficult to compose properly so often times between page navigations some chunks will be split although they could be shared.
@elamandeep
@elamandeep Год назад
React Server Components are the same as PHP inside HTML. They will be slow compared to APIs.
@gordonfreimann
@gordonfreimann Год назад
It's worse than PHP actually.
@tdp-pop6810
@tdp-pop6810 Год назад
Wrong, PHP has no rehydration. To be faster or slower will depend on how much JS the client side needs to run before the page is interactive and how much processing the server needs to do.
@jherr
@jherr Год назад
So I wouldn't say he's wrong then, I would say that the two are hard to compare because the models are so different. That being said, there SHOULD be no hydration on the client because these tests are all RSCs, but there is hydration because NextJS doesn't optimize for the zero-JS case.
@Dev-Siri
@Dev-Siri Год назад
​@@jherr react needs to be shipped because of client side routing and other client work so zero js is not possible.
@jherr
@jherr Год назад
@@Dev-Siri In this case it actually would work since there literally is no client side interactivity on the page, but yeah, in any real app something is going to happen client side so you'll need React.
@NamanGoel34
@NamanGoel34 11 месяцев назад
Biggest problem with the tests: you don't test the size of the JavaScript that is loaded with a script tag on the initial HTML page render. Second biggest problem: RSC loads some strings in inline script tags after the actual HTML content has streamed. You're counting this in your measurements but not counting the bigger scripts that pages would load in a separate request. Overall, you only tested the performance of the HTML request and didn't account for other requests.
@NickServ
@NickServ Год назад
Hi Jack, is it intentional that you're comparing React stable (imported by pages router) to React canary (vendored with app router)? I'd expect pages router to naturally be faster because it's rendering with a more stable and mature version of React.
@haithem8906
@haithem8906 Год назад
It could be for a multitude of reasons. One of them is that the app router is pretty new compared to pages. It should get better over time.
@Abdullah-yq7jp
@Abdullah-yq7jp Год назад
One other point you fail to mention is the maturity of pages vs app. App has just been released while pages has been around for years, so improvements in perf and further optimisations for app are still to come.
@undefindjs2419
@undefindjs2419 Год назад
For me this is a big thing. I can't convert one of my clients' sites to the app router because all data is fetched with a user token, so it can't be cached. One thing I did to test: I created a small app, like an admin site, where you can list users, go into a user and update, say, its username, and then go back to the list of users. With the new app router I can't get it to show the new data. It updates one time, but if I go back to the user page again, the old data will be there. The server is not fetching the new data from the database. I have tried all the different cache methods you can use, and I can never get the server to fetch new data more than one time. When I change this app to the pages router everything works great, and the server fetches new data on every page load. I'm not a big fan of the app router right now. Maybe if you have a site where data is very stale and never changes it's great, but for a site that has live updates, like an admin site, the app router is just bad.
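One way to handle the "fetched with a user token, never cache it" case described above is to opt the fetch (or the whole route segment) out of Next's data cache. A sketch using no-store and force-dynamic, with a hypothetical endpoint and cookie name:

```tsx
// app/users/page.tsx (sketch)
import { cookies } from "next/headers";

export const dynamic = "force-dynamic"; // render on every request

type User = { id: string; name: string };

export default async function Users() {
  const token = cookies().get("session")?.value;
  const users: User[] = await fetch("https://api.example.com/users", {
    cache: "no-store", // per-user data: skip the data cache entirely
    headers: { Authorization: `Bearer ${token}` },
  }).then((res) => res.json());
  return (
    <ul>
      {users.map((user) => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  );
}
```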
@pawel_890
@pawel_890 Год назад
The solution is simple, both servers are identically fast, the problem is the size of the HTML. Streaming increases the size of the HTML and this interferes gently with the transfer capabilities. Try measuring the size of the HTML.
@jherr
@jherr Год назад
I agree that the response payloads are bigger with RSCs because the response contains a frozen DOM, but we are seeing performance differences at sizes as small as a single tag, or ten tags.
@pawel_890
@pawel_890 Год назад
Maybe the benchmark tool is broken. Did you test other production websites with oha?
@jherr
@jherr Год назад
@@pawel_890 I’ve tested a bunch of sites with different architectures with oha. I’m pretty confident in it.
@JokeryEU
@JokeryEU Год назад
Always use a cable connection rather than wireless if you want stable and consistent results.
@jherr
@jherr Год назад
Fair.
@andrewbateman-wd4le
@andrewbateman-wd4le Год назад
Interesting comparison, but perhaps not apples to apples. I would expect RSC to take slightly longer to process. The clue is in the name - the server is actually rendering React Components on the Server, which simply has to add some overhead. Where metrics for RSC will shine over client rendered components is in perceived performance, measured using things like Web Vitals. I would love to see what the difference in FCP would be for the examples given in your video. Also, as you said if you leveraging a CDN or another caching mechanism, the response time differences will be null and void for >99% of requests. Keep up the great work 😊
@jherr
@jherr Год назад
Both apps are 100% server rendered.
@andrewbateman-wd4le
@andrewbateman-wd4le Год назад
😮as you were then!
@andrewbateman-wd4le
@andrewbateman-wd4le Год назад
In my defence I watched the video at 6:00am my time. Tired brain = stupid comments!
@jherr
@jherr Год назад
@@andrewbateman-wd4le no worries at all. Been there done that.
@kiikoh
@kiikoh Год назад
The app router is rendering on the server, of course it will be slower. The speed savings would be shown in the browser: getting delivered HTML vs JS, time saved on rendering, and network transport time should be lower.
@Zaber123
@Zaber123 Год назад
The pages version is also rendered on the server. This is a comparison of two different react SSR implementations
@reginaldbellas703
@reginaldbellas703 Год назад
Hey Jack, I respect you, but like I said a week ago, React is dead - it's either Qwik or Angular signals.
@majorhumbert676
@majorhumbert676 Год назад
And next week it will be something different. If people really cared, they'd be using Elm, which solved all these performance issues ages ago.