smoya opened this issue 5 months ago
https://asyncapi-studio-studio-next.vercel.app is built with `output: 'standalone'`. Please check if it has the same behavior.
This build includes f5676ec feat: add og tags to studio-next (#1106).
| Iteration | Time taken (milliseconds) |
|---|---|
| 1st | 569.98 |
| 2nd | 49.12 |
| 3rd | 202.69 |
| 4th | 37.65 |
| 5th | 160.47 |
Yes, in your case @aeworxet the time taken is much lower, even for the first hit. The `output: 'standalone'` setting is what makes the difference, I guess. Also, the URL used is the same as the one mentioned here: https://github.com/asyncapi/studio/issues/224#issuecomment-2160148396
@aeworxet's instance is deployed on Vercel; when I try adding the same optimization he mentioned to Netlify, the response time reduces significantly for me as well. Thanks @aeworxet
My instance can be found here: https://studio-helios2002.netlify.app
> @aeworxet's instance is deployed on Vercel, when I try adding the same optimization he mentioned in Netlify the response time reduces significantly for me as well. Thanks @aeworxet
That doesn't seem to happen. These are the times when requesting your site:
---
time_lookup: 0.013921
time_connect: 0.040744
time_appconnect: 0.070180
time_pretransfer: 0.070328
time_redirect: 0.000000
time_starttransfer: 3.804182
———
time_total: 3.831893
The difference between the responses from your app and @aeworxet's is that the second is always cached: `"x-vercel-cache": ["HIT"]`.
I guess we should do some local testing first rather than relying on the CDN.
Are those the times to load the entire page? By less time I meant the time to fetch the initial HTML that is server-side rendered and contains the meta tags.
Because these are the times I see on my side:

| Iteration | Time (milliseconds) |
|---|---|
| 1st | 867.38 |
| 2nd | 105.30 |
| 3rd | 84.85 |
| 4th | 84.92 |
| 5th | 113.89 |
> Are those the times to load the entire page? By less time I meant to fetch the initial HTML that is server-side rendered and contains the meta tags.
What I shared is the time of a `curl` request made against your site. The `time_total` is the time of the whole curl execution. The `time_starttransfer` is the time that passed until the server started serving content.
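For anyone who wants to reproduce numbers like these: curl can emit them via a `-w` format file (a sketch; the exact invocation used above isn't shown in this thread). Save the following as `curl-format.txt` and run `curl -w "@curl-format.txt" -o /dev/null -s https://studio-helios2002.netlify.app`:

```text
time_lookup:        %{time_namelookup}\n
time_connect:       %{time_connect}\n
time_appconnect:    %{time_appconnect}\n
time_pretransfer:   %{time_pretransfer}\n
time_redirect:      %{time_redirect}\n
time_starttransfer: %{time_starttransfer}\n
time_total:         %{time_total}\n
```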
> Because these are times I see on my side:
Aren't you hitting cache?
> Because these are times I see on my side:
>
> | Iteration | Time (milliseconds) |
> |---|---|
> | 1st | 867.38 |
> | 2nd | 105.30 |
> | 3rd | 84.85 |
> | 4th | 84.92 |
> | 5th | 113.89 |
These are the times I got for a fresh document itself, so I don't think I am hitting the cache; also, the time for the first response is much higher than for the next 4.
What do you mean by first response? The first response after a deployment?
Nope, I mean that I send 5 requests continuously to the site, and the time it takes to obtain the meta tags each time is what I am calling a response.
Ok, meta tags. Interesting. I'm just doing a curl; not sure what the client you use is doing. Anyway, the issue is still there.
For the record, sharing two responses and their headers. As you can see, there is no difference, not even in the cache headers.
The first request was made this early morning, the second right after the first:
response_code: 200
headers: {"age":["35139"],
"cache-control":["public,max-age=0,must-revalidate"],
"cache-status":["\"Next.js\"; hit","\"Netlify Edge\"; fwd=miss"],
"content-type":["text/html; charset=utf-8"],
"date":["Fri, 14 Jun 2024 05:09:43 GMT"],
"etag":["\"3wn78und5ueoa\""],
"netlify-vary":["header=x-nextjs-data|x-next-debug-logging|RSC|Next-Router-State-Tree|Next-Router-Prefetch|Accept-Encoding,cookie=__prerender_bypass|__next_preview_data"],
"server":["Netlify"],
"strict-transport-security":["max-age=31536000; includeSubDomains; preload"],
"vary":["RSC,Next-Router-State-Tree,Next-Router-Prefetch,Accept-Encoding"],
"x-content-type-options":["nosniff"],
"x-nextjs-date":["Fri, 14 Jun 2024 05:09:43 GMT"],
"x-nf-request-id":["01J0AJDH1HTFWZK1YF8AAYZKSB"],
"x-powered-by":["Next.js"],
"content-length":["19062"]
}
---
time_lookup: 0.015968
time_connect: 0.045507
time_appconnect: 0.076425
time_pretransfer: 0.076563
time_redirect: 0.000000
time_starttransfer: 3.505065
———
time_total: 3.532842
response_code: 200
headers: {"age":["35548"],
"cache-control":["public,max-age=0,must-revalidate"],
"cache-status":["\"Next.js\"; hit","\"Netlify Edge\"; fwd=miss"],
"content-type":["text/html; charset=utf-8"],
"date":["Fri, 14 Jun 2024 05:16:34 GMT"],
"etag":["\"3wn78und5ueoa\""],
"netlify-vary":["header=x-nextjs-data|x-next-debug-logging|RSC|Next-Router-State-Tree|Next-Router-Prefetch|Accept-Encoding,cookie=__prerender_bypass|__next_preview_data"],
"server":["Netlify"],
"strict-transport-security":["max-age=31536000; includeSubDomains; preload"],
"vary":["RSC,Next-Router-State-Tree,Next-Router-Prefetch,Accept-Encoding"],
"x-content-type-options":["nosniff"],
"x-nextjs-date":["Fri, 14 Jun 2024 05:16:34 GMT"],
"x-nf-request-id":["01J0AJT4HJ7V04BBH4Z48QQP9J"],
"x-powered-by":["Next.js"],
"content-length":["19062"]
}
---
time_lookup: 0.014502
time_connect: 0.041648
time_appconnect: 0.071193
time_pretransfer: 0.071333
time_redirect: 0.000000
time_starttransfer: 0.530230
———
time_total: 0.558090
Here is the diff:
2c2
< headers: {"age":["35139"],
---
> headers: {"age":["35548"],
6c6
< "date":["Fri, 14 Jun 2024 05:09:43 GMT"],
---
> "date":["Fri, 14 Jun 2024 05:16:34 GMT"],
13,14c13,14
< "x-nextjs-date":["Fri, 14 Jun 2024 05:09:43 GMT"],
< "x-nf-request-id":["01J0AJDH1HTFWZK1YF8AAYZKSB"],
---
> "x-nextjs-date":["Fri, 14 Jun 2024 05:16:34 GMT"],
> "x-nf-request-id":["01J0AJT4HJ7V04BBH4Z48QQP9J"],
19,22c19,22
< time_lookup: 0.015968
< time_connect: 0.045507
< time_appconnect: 0.076425
< time_pretransfer: 0.076563
---
> time_lookup: 0.014502
> time_connect: 0.041648
> time_appconnect: 0.071193
> time_pretransfer: 0.071333
24c24
< time_starttransfer: 3.505065
---
> time_starttransfer: 0.530230
26c26
< time_total: 3.532842
---
> time_total: 0.558090
Even though in my case the entire document also needs to be fetched, this is the script I used, for anyone wanting to try it out: https://gist.github.com/helios2003/2fdb65377a8b1580b91464cbc7a1d974
The theory that is gradually becoming more solid in my head is that we are always doing SSR. And that makes sense, because afaik NextJS is set up by default to SSR (even for static pages) and then rely on cache. Since SSR on Netlify happens in Netlify Functions (serverless), the first request requires a function cold start, which takes a lot of time. The rest are either cached by the Edge, or we find the serverless function already warmed up.
A post somewhat backing up my theory: https://answers.netlify.com/t/slow-initial-load-time-on-ssg-with-nextjs/46384/3
> The theory that is gradually becoming more solid in my head is that we are always SSR. And that makes sense, because afaik NextJS is set up by default to SSR (even for static pages), then rely in cache. Since SSR in Netlify happens in Netlify functions (serverless), the first request requires a function cold start, which takes a lot. The rest, are either cached by the Edge or we find the serverless function to be warmed up.
>
> Post backing up someho my theory: https://answers.netlify.com/t/slow-initial-load-time-on-ssg-with-nextjs/46384/3
In order to validate this theory, I believe measuring the time from when the request hits the NextJS router until it serves the response should tell us the time spent processing the request. The rest would be the time spent spinning up the serverless function. Note that my NextJS knowledge is close to zero and I'm just assuming the SSR happens before hitting the NextJS router; if that's not the case, we should test accordingly.
Additionally, can we check if we are using the NextJS Runtime at Netlify? I have no permission to see the build logs at https://app.netlify.com/sites/studio-next 🤷. The build logs should show something indicating the runtime is in use.
EDIT: Can you confirm, @helios2003, that https://studio-helios2002.netlify.app has the Netlify NextJS runtime enabled? So we can discard this as a possible solution.
@smoya Yes, the NextJS runtime is enabled in https://studio-helios2002.netlify.app/.
A difference that I notice between Netlify's version and Vercel's version is the build cache. The attached image shows the build logs of the Vercel deployment.
I believe the Studio maintainers can verify whether this cache is being generated on the Netlify deployment, because from what I observe, Netlify creates a NextJS cache but not a build cache.
@smoya I did some testing and I think your theory is right.
I don't think there are any static pages in NextJS now. There is always a runtime involved, and by "Static" they mean "we are going to cache this page for you, and any time we receive a request we are going to serve the cached page", not "we are going to generate a separate HTML page for it that is servable from a CDN".
Which means a runtime is ALWAYS THERE on the server to decide what to serve and what parts to cache. We will encounter cold starts and some extra time for the backend to decide whether it should serve a cache or not.
BTW, we need this, right? We need some part of the page to be rendered on the server so we can add the OpenGraph metadata?
@KhudaDad414, can you tell why we aren't caching certain components at build time itself, following Static Site Generation (SSG)? Also, in the previous comment, do you mean that on the first request the page is entirely rendered on the server? Ref: https://nextjs.org/docs/pages/building-your-application/rendering/static-site-generation
Also, is a build cache being uploaded in the production deployment?
> The theory that is gradually becoming more solid in my head is that we are always SSR. And that makes sense, because afaik NextJS is set up by default to SSR (even for static pages), then rely in cache. Since SSR in Netlify happens in Netlify functions (serverless), the first request requires a function cold start, which takes a lot. The rest, are either cached by the Edge or we find the serverless function to be warmed up.
>
> Post backing up someho my theory: https://answers.netlify.com/t/slow-initial-load-time-on-ssg-with-nextjs/46384/3
Hi @smoya, I'm new to the AsyncAPI community, but I disagree with some of your points here. Netlify Edge is built on Deno Deploy, which relies on V8 isolates, and V8 isolates are known for fast starts even when a cold start happens; it's different from what we see in virtual machines. My assumption is that this is probably related to some CSS package being downloaded during the initial startup. I will try to set up the bundle analyzer and see what happens there.
References:
- https://news.ycombinator.com/item?id=31912582
- https://www.netlify.com/blog/deep-dive-into-netlify-edge-functions/
- https://deno.com/blog/anatomy-isolate-cloud
@helios2003
> can you tell why we aren't caching certain components during the build time itself and following Static Site Generation (SSG).
The whole page is statically generated and cached currently, not just some components. We would be able to make the page generation dynamic in the future, because we have to generate the OpenGraph metadata at some point.
First scenario: no cache at the CDN level (Netlify Edge); we had to cold start the server and get the Next.js cache.
Second scenario: cache hit at the CDN level (Netlify Edge).
Third scenario: no cache at the CDN (Netlify Edge) level, but cache at Next.js.
@jerensl I don't think the problem is with downloading some CSS. As you can see in the examples above, the wait time increases in the Server Processing stage, which is before any download begins.
1) When we miss the CDN (Netlify Edge) cache, the lambda function (or whatever Netlify uses) needs a cold start; that's why we are getting the 4-second time on the first request. There is nothing we can do here as far as I know. 🤷
2) The CDN (Netlify Edge) cache misses randomly: the default behaviour should work fine, but for some reason it doesn't.
Based on some tests that I have done on my fork, hosted here, we can resolve this issue by setting custom cache options in the response headers:
# Netlify CDN should keep the cache for 100 days.
CDN-Cache-Control: public, max-age=3640000, must-revalidate
# Other Layers (including browser) shouldn't do any caching.
Cache-Control: public, max-age=0, must-revalidate
After we add those headers, the `cache-status` will be:
- `"Netlify Edge"; fwd=stale`: caching is in progress at the current CDN node, so it has to hit Next.js.
- `"Netlify Edge"; hit`: content is being served from the CDN.
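For what it's worth, one place such headers could be set (an illustrative sketch, not Studio's actual config; a Netlify `_headers` file is another option) is the `headers()` hook in `next.config.js`:

```javascript
// next.config.js (sketch): attach the custom cache headers to every route.
/** @type {import('next').NextConfig} */
const nextConfig = {
  async headers() {
    return [
      {
        source: '/:path*',
        headers: [
          // Netlify CDN should keep the cache for ~100 days.
          { key: 'CDN-Cache-Control', value: 'public, max-age=3640000, must-revalidate' },
          // Other layers (including the browser) should always revalidate.
          { key: 'Cache-Control', value: 'public, max-age=0, must-revalidate' },
        ],
      },
    ];
  },
};

module.exports = nextConfig;
```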
@KhudaDad414
> @jerensl I don't think the problem is with downloading some CSS. As you can see in the above examples the wait time increases in the Server Processing stage which is before any download begins.
Yeah, you are right, there is something related to the Server Processing stage.
The problems
> When we miss the CDN(Netlify Edge) cache, the lambda function(or whatever Netlify uses) needs a cold start, that's why we are getting the 4sec time in the first request. there is nothing we can do here as far as I know. 🤷
But I think we can do something about the 4-second cold start. Let me explain: after checking with the bundle analyzer, I found that `monaco-editor` is being used both on the client side and in Node.js (server side).
I also ran another test on this website, https://studio-helios2002.netlify.app/, and found there are long-running tasks on the main thread related to Monaco, which are identical to the cold start in Node.js (see the red arrow).
I then checked the code where Monaco is declared using a web worker. Why are there no web worker tasks running here?
Conclusion
After making some contributions to Modelina, I realized they handled `monaco-editor` very well; shout out to the maintainers there, they did a great job.
Ideally, the Monaco Editor should run in a worker and not block the main thread as in the image above, so the user gets their content first and the render is not blocked.
One thing I noticed comparing Modelina and Studio is:
Let's check Theo's video here; he explains well why the App Router will mostly get a cold start and offers a solution to that problem: https://www.youtube.com/watch?v=zsa9Ey9INEg&t=643s
Solution:
@jerensl
> Running the monaco editor using Web worker is not working here but what happened is it's running on the main thread which blocks around 4-6 seconds
The main thread of the client or the server? If you mean on the server: the page is static and is only built once (at build time). If you mean on the client: then why doesn't it always have that 4-second waiting time, and why is it on par with https://studio.asyncapi.com/, which is a normal CRA?
> Ideally, Monaco Editor should be run in the worker and not block the main thread as in the image above, so the user gets their content first and does not block the render.
It does (at least the two workers that are supposed to: the main worker and the YAML worker).
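For context, Monaco's worker wiring usually goes through `self.MonacoEnvironment.getWorker`, which picks a worker entry per language label. A simplified, illustrative mapping (the function name and returning module paths instead of `Worker` instances are my choices so it stays runnable outside a browser; the entry paths are the typical `monaco-editor`/`monaco-yaml` ones, not necessarily Studio's exact setup):

```javascript
// Illustrative: which worker entry module serves which Monaco language label.
// Real code would return `new Worker(...)` from self.MonacoEnvironment.getWorker.
function workerEntryFor(label) {
  if (label === 'yaml') {
    return 'monaco-yaml/yaml.worker'; // YAML language worker
  }
  // Everything else falls back to the generic editor ("main") worker.
  return 'monaco-editor/esm/vs/editor/editor.worker';
}

console.log(workerEntryFor('yaml'));      // → monaco-yaml/yaml.worker
console.log(workerEntryFor('plaintext')); // → monaco-editor/esm/vs/editor/editor.worker
```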
> Migrate to Pages router, I think this is the most visible solution I can think of because most of the Monaco Editor we want them to run on the client side and we can isolate them not using global.window from the server
Can you point out what feature we need from the `pages` directory that isn't accessible in the `app` directory? And how would we "isolate them not using global.window from the server", considering the page is static?
@KhudaDad414
> main thread of client or server? if you mean on server the page is static and is only being built once (at build time). if you mean on the client then why it always doesn't have that 4 sec waiting time? and is on par with https://studio.asyncapi.com/ which is a normal CRA.
> can you point out what feature do we need from `pages` directory that isn't accessible in `app` directory? and how would we isolate them not using global.window from the server? considering the page is static?
I think the concept you mention relates more to the Pages Router, which is the old way of using NextJS, without React Server Components (RSC). The way you've implemented it here uses the App Router, which is built on top of React Server Components by default. I also see you are using a Client Component; I think there are misconceptions about what it is supposed to do. As far as I know, a Client Component renders on both the server and the client, and a technique called hydration then injects functionality on the client side. Also see here how Dan Abramov explains RSC in a simple way.
In my opinion, the NextJS App Router and Pages Router are two different kinds of framework; as discussed here, I don't think they should be considered merely different architectures or a to-do-list item (React Server Components). The App Router is more similar to Remix than to Create React App.
I also couldn't find any decision record about why you came up with the idea to use React Server Components here: https://github.com/asyncapi/studio/blob/master/doc/adr/0007-use-nextjs.md
Based on how React Server Components work, it's not surprising that we got a 4-second cold start, because component rendering happens on both the client and the server.
Consider how huge the changes required to use React Server Components are; they make us rethink how we are supposed to deal with the server and the client at the same time, and they have also made some state managers think again about how they should deal with it: https://github.com/pmndrs/zustand/discussions/2200
> It does (at least the two workers that are supposed to, main worker and yaml worker)
If that gets fixed, good then. BTW, I ran the test on the website mentioned above, which is https://studio-helios2002.netlify.app/
Thanks for the explanation @jerensl.
> I think the concept you mention is more related to the Pages router which is an old way of using NextJS without React Server components(RSC)
By "static" I meant that the `/` route is statically rendered and, by extension, Full Route Cached.
> Based on how it works in React Server Components, it's not surprising why we got 4 second cold start, because component render both happened in Client and Server
This would be valid if we had a dynamically rendered page. Since the page is statically rendered and Full Route Cached, the server-side components won't render per request; they are rendered at build time and served to the client as the React Server Component Payload.
Are you suggesting that a cold start invalidates the cache and the page is rendered on the server again?
@KhudaDad414
> by static I meant that the `/` route is statically rendered and by extension Full Route Cached.
The `/` route is basically a server component that is statically rendered by default, but that works very differently from a client component, which needs server rendering.
> This would be valid if we had a Dynamically Rendered page. Since the Page is Statically Rendered and is Full Route Cached, the server side components won't render with requests but is rendered at build time and is served to the client as React Server Component Payload.
But we still have `components/StudioWrapper.tsx`, right? Because of that, the `/` route, which was statically rendered before, becomes dynamically rendered.
> Are you suggesting that a cold start invalidates cache and the page is rendered on the server again?
No, but I'm suggesting experimenting with partial prerendering; keep in mind it is still an experimental feature. Basically, it renders the static parts without waiting for the dynamic rendering.
Reference: https://www.youtube.com/watch?v=MTcPrTIBkpA
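Opting into that experiment would be a small config change (a sketch; at the time of this thread, Partial Prerendering requires a Next.js canary release and the flag may change):

```javascript
// next.config.js (sketch): opt into experimental Partial Prerendering,
// which serves the static shell immediately and streams dynamic parts after.
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    ppr: true,
  },
};

module.exports = nextConfig;
```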
@jerensl
> But we still have components/StudioWrapper.tsx right? because of that the page '/' route which a static rendering before becomes dynamically rendering
Yes, it does, but only at build time. No server-side code runs for statically generated routes, no matter whether there is a "use client" directive or not. Tested here.
Can you give an example where a route pre-rendered as static renders on the server (other than at build time, of course)?
> the page '/' route which a static rendering before becomes dynamically rendering
Sorry, I don't understand: how does a route with static rendering "become" dynamic? Can you explain it a bit more?
@KhudaDad414
> Yes it does. but only at build time. No server side code runs for statically generated routes. doesn't matter if there is a "use client" directive or no. Tested here
I think I got it wrong here, but sure, statically generated routes run during build time to generate HTML, except for client components that are lazy-loaded with SSR disabled during the initial load.
> Can you give an example that a pre-rendered as static route renders on server (other than build time of course)? Sorry, I don't understand, how does a route with static rendering "becomes" dynamic? can you explain it a bit more?
It can happen under strict rules, but not in our case; for example, if we read a cookie or turn off caching on the fetch API.
Do you know what I'm missing here? Did NextJS statically generate the JSX into HTML? Why is the majority of the website still JSX, with only the navbar statically generated at build time? Does skipping SSR mean no statically generated HTML during build time either? Isn't the point of using NextJS that it turns components into HTML?
> Do you know what I'm missing here?
The Full Route Cache (a statically generated route, if we can call it that) will only take effect when you are not opting out of it.
> why majority of the website still on JSX and just the navbar which got only statically generated on built time?
Well, since we are using Monaco and it can't be rendered on the server, plus the other two components (Navigation and Preview) depend on state, they have to be generated on the client.
As for the other two (the Sidebar and the toolbar at the top), I am not sure if we can render them on the server. It may be possible.
> Did they mean skipping SSR meaning no Staticly Generated HTML during built time too?
It means: do not try to load this on the server side, since it relies on the `window` object and will fail.
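In Next.js that usually means loading the component through `next/dynamic` with `ssr: false`. The check this spares you from is a guard like the following (a minimal sketch; the function name is mine):

```javascript
// Sketch: code that touches `window` must only run where `window` exists.
// Skipping SSR for a component is effectively applying this guard to it.
function canUseDom() {
  return typeof window !== 'undefined' && typeof window.document !== 'undefined';
}

// In a browser this is true; on a Node/serverless server it is false,
// so e.g. Monaco setup would be skipped there.
console.log(canUseDom());
```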
Some questions that I completely don't know the answer to, and that we need to answer to decide how we are going to structure the application. These are out of the scope of this issue and need to be discussed separately.
> what can be rendered on the server and what should render on the client?
An RSC Client Component is already smart enough to separate what belongs to the client and what belongs to the server via the hydration mechanism. Before RSC, we needed `useEffect` as a side effect to detect window/browser APIs, but with RSC we don't need that anymore.
> How are we going to manage the state, should we keep the current approach?
Just so you know, before server components existed, react-query had a solution for managing this complexity of state between server and client, what they call asynchronous state.
The implementation of RSC in react-query seems a bit complex and reminds me of why we moved away from Redux in the first place, and they are still figuring out how they will do it in the future: https://tanstack.com/query/latest/docs/framework/react/guides/advanced-ssr. They also wrote a blog post about the trade-offs around network waterfalls: https://tanstack.com/query/latest/docs/framework/react/guides/request-waterfalls. This network waterfall is also part of why Remix is arguably better than NextJS + RSC: https://remix.run/blog/react-server-components#obsessed-with-ux
Also, let's talk about the network waterfall; it has been a hot topic between NextJS + RSC and Remix. NextJS handles it by rewriting the fetch standard on the server. This solution is supposed to deduplicate identical fetch requests, and it introduced the default caching behavior in NextJS as we see it now. Other frameworks like Remix insist that the web standard should not be rewritten and that developers should control their own caching behavior. This also led to controversy and made the React team remove fetch deduplication from RSC; let's see if NextJS will follow or not: https://www.youtube.com/watch?v=AKNH7mXciEM&t=920s
It's supposed to be a good answer, but I also don't know yet, because we are at an awkward spot right now as web developers.
This issue has been automatically marked as stale because it has not had recent activity :sleeping:
It will be closed in 120 days if no further activity occurs. To unstale this issue, add a comment with a detailed explanation.
There can be many reasons why some specific issue has no activity. The most probable cause is lack of time, not lack of interest. AsyncAPI Initiative is a Linux Foundation project not owned by a single for-profit company. It is a community-driven initiative ruled under open governance model.
Let us figure out together how to push this issue forward. Connect with us through one of many communication channels we established here.
Thank you for your patience :heart:
Description
I'm working with @helios2003 on https://github.com/asyncapi/studio/issues/224. In particular, I'm his mentor through GSoC 2024. As part of that project, we evaluated the possibility of running the AsyncAPI Parser-JS and parsing AsyncAPI documents loaded via the `base64` query param. @helios2003 ran a test measuring response time with and without the addition of the parsing (which would change the page from being static to SSR). Our surprise came when we realized the NextJS version hosted in https://github.com/asyncapi/studio/issues/224 took ~4 seconds just to serve the HTML for the Studio page (without even loading any doc on it!), just plain `/`.
Today I decided to run another test and confirmed the findings. However, a weird caching mechanism is happening.
Let me share with you 3 consecutive requests I made and the results:
1. [Next.js; hit, Netlify Edge; fwd=miss]
2. [Netlify Edge; hit]
3. [Next.js; hit, Netlify Edge; fwd=miss]
My assumption and understanding is that, for some reason:
The point is that I don't expect the first call to take 4 seconds; it should behave like the third request, since a request made to the root page should always give the same (static) response. Besides that, no clue why the `cache-status` header in the 1st request says the content was served as cached from NextJS.
cc @Amzani @helios2003 @KhudaDad414