vikejs / vike

🔨 Flexible, lean, community-driven, dependable, fast Vite-based frontend framework.
https://vike.dev
MIT License

Server-side code-splitting for edge platforms #433

Closed · brillout closed this issue 1 year ago

brillout commented 2 years ago

Description

As a user deploying to an edge platform, I want to be able to split the server production code by having one server bundle per page.

That would:

Alternatively, the user can extend Cloudflare's 1 MB limit.

Related Discord conversation.

AaronBeaudoin commented 2 years ago

A few specifics that would be necessary for this:

  1. The renderPage function call in production would need some way to specify the bundle to use. So it would need to be something like renderPage("bundle-name-or-id", { ... }), for example. Maybe there's a way to abstract this away, but even then there will likely be cases where manually specifying the bundle is still necessary.
  2. While actually creating all the functions themselves is still a user-land concern, a manifest is needed to help. For example, with Vercel, I'll need to create route rules for the parameterized pages, and a manifest gives me a way to automate that step easily. The manifest would have (preferably) a regular expression for the routes that should be handled by each page, and a path/unique ID for the page/function. That would make it pretty simple for me to generate the routes I need. Here's an example for reference (a sketch of how such a manifest could drive route generation follows the example below):

For the following set of pages:

You might get a manifest like this:

[
  {
    "path": "/product/special/info",
    "pattern": "/product/special/info"
  },
  {
    "path": "/product/special",
    "pattern": "/product/special"
  },
  {
    "path": "/product/@id",
    "pattern": "/product/.+"
  }
]

This is assuming your pattern doesn't need to consider query parameters, and that entries higher in the list take precedence over lower ones. In other words, sort order matters here.
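To illustrate the route-rule automation mentioned in point 2, here is a minimal sketch of a build step that turns such a manifest into Vercel Build Output routes. The manifest location, the per-page function naming convention, and the generate-vercel-routes.ts script itself are assumptions for illustration, not anything vite-plugin-vercel or vike ships today:

// generate-vercel-routes.ts; hypothetical build step (file locations and naming are assumed).
import { readFileSync, writeFileSync } from "node:fs";

interface ManifestEntry {
  path: string;    // page route, e.g. "/product/@id"
  pattern: string; // regex matching request URLs, e.g. "/product/.+"
}

// Assumed manifest location; the format is the one sketched above.
const manifest: ManifestEntry[] = JSON.parse(
  readFileSync("dist/server/pages-manifest.json", "utf-8")
);

// One route per page, each pointing at its own function. Order is preserved
// so that more specific patterns take precedence, as noted above.
const routes = manifest.map((entry) => ({
  src: `^${entry.pattern}$`,
  // Illustrative naming convention: fold "/" and "@" into dashes.
  dest: "/" + (entry.path.slice(1).replace(/[@/]/g, "-") || "index"),
}));

// Vercel's Build Output API reads routing rules from .vercel/output/config.json.
writeFileSync(".vercel/output/config.json", JSON.stringify({ version: 3, routes }, null, 2));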


Finally, I still think it would be useful to allow manually specifying a name for a bundle along with a "route string" matching the routes that bundle should handle. This is still valuable because there is very likely a set of users who have too many pages to fit into a single server bundle while staying under 1 MB, but who also aren't ready to roll their own script to generate all the platform-specific routing/function code needed to deploy one function per page. Their primary concern is really just to split their server code into two, or a few, bundles, still wired up by hand.
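To make that concrete, here is a hypothetical sketch of what one of a handful of hand-split bundles could look like. The bundle name and route strings are assumptions illustrating the proposed manual-split config; renderPage() is called with its usual single pageContext argument, on the assumption that this bundle's build only includes this bundle's pages:

// edge-entry-marketing.ts; hypothetical entry for one of a few hand-split bundles.
import { renderPage } from "vike/server";

// Hypothetical manual-split config: a bundle name plus the route strings it handles.
// A deploy script (not shown) would import these to generate the platform's routing rules.
export const BUNDLE_NAME = "marketing";
export const BUNDLE_ROUTES = ["/", "/pricing", "/about/*"];

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // This bundle's build is assumed to contain only the pages matched by BUNDLE_ROUTES.
    const pageContext = await renderPage({ urlOriginal: url.pathname + url.search });
    const { httpResponse } = pageContext;
    if (!httpResponse) return new Response("Not Found", { status: 404 });
    // Response headers omitted for brevity.
    return new Response(httpResponse.body, { status: httpResponse.statusCode });
  },
};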

brillout commented 2 years ago

About deploy URL patterns, I believe vite-plugin-vercel actually already does this (in order to deploy some vps pages with ISR and others with SSR).

As for manual splitting, the way I see it, it's unnecessary if we can provide optimal server code-splitting (i.e. one bundle per page) while the stitching is done by some other tool such as HatTip. But if automatic server code-splitting doesn't turn out to be optimal, then yes, manual chunking should be a consideration. Let's see.

brillout commented 2 years ago

Also, a neat thing about Server Code Splitting is that it enables deploying pages to multiple providers. E.g. static pages to GitHub Pages, marketing pages to the Edge, and admin pages to Serverless.

AaronBeaudoin commented 2 years ago

Just a note regarding this: it's a lot more beneficial on some platforms than on others. On Vercel Edge Functions, 1 MB is a hard limit that there doesn't seem to be any way to increase. On Cloudflare Workers, not only can the limit be increased, but I've discovered that the whole platform seems to be designed in such a way as to discourage splitting each page into its own worker. You could conceivably get away with it, but you'd be doing some crazy Wrangler gymnastics with each deploy.

brillout commented 2 years ago

On Cloudflare Workers, not only can the limit be increased, but I've discovered that the whole platform seems to be designed in such a way as to discourage splitting each page into its own worker. You could conceivably get away with it, but you'd be doing some crazy Wrangler gymnastics with each deploy.

That's good to know. Why is that?

AaronBeaudoin commented 2 years ago

Well, you'd have to run wrangler publish <entry-for-page> --name <some-unique-name> for every page. You can only specify one entry in wrangler.toml of course, so if you want to publish a bunch of workers, one for each page, you'll have to come up with some build process to create all those worker entries. Then, once you've deployed them, you're going to get a huge list of workers in the Cloudflare dashboard, and since each worker is sort of treated as a full project, you've got to do that annoying thing where you type the full name of the worker if you ever try to delete it. Kind of nasty.

This feature is still useful, but definitely should be opt-in.

brillout commented 2 years ago

@AaronBeaudoin Yea I agree it's too much of a hassle for users. But I think it's an option if there is some kind of build script orchestrating it all for the user.

For example, Vercel Edge runs on Cloudflare Workers, which shows it's possible.
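For what it's worth, here is a rough sketch of the kind of orchestration script meant here, assuming the build emits one worker entry per page under dist/workers/ and that workers are named <project>-<page>; both the directory layout and the naming are assumptions, and the command is the same wrangler publish <entry> --name <name> invocation quoted above:

// publish-workers.ts; hypothetical deploy orchestration (directory layout and names assumed).
import { readdirSync } from "node:fs";
import { execFileSync } from "node:child_process";
import path from "node:path";

const WORKERS_DIR = "dist/workers"; // assumed: one built entry file per page
const PROJECT = "my-app";           // assumed prefix keeping worker names unique

for (const file of readdirSync(WORKERS_DIR)) {
  if (!file.endsWith(".js")) continue;
  const entry = path.join(WORKERS_DIR, file);
  const name = `${PROJECT}-${path.basename(file, ".js")}`;
  // One publish per page, mirroring the command quoted above.
  execFileSync("npx", ["wrangler", "publish", entry, "--name", name], { stdio: "inherit" });
}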

brillout commented 1 year ago

Closing for lack of interest.

Also, Cloudflare keeps increasing its worker size limit, so this becomes less and less of a need.

brillout commented 1 year ago

That said, it can still be relevant for decreasing cold starts. Let's see.