Tal500 opened 1 year ago
We can take it a step further and say that `data`
will be a function returning a promise, meaning that if the shared load function never calls it, the request to the server load function wouldn't be made at all.
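The proposal above can be sketched roughly as follows. This is a hypothetical illustration, not a real SvelteKit API: `createLoadEvent` and `requestServerData` are invented names standing in for the framework's internals.

```javascript
// Hypothetical sketch: instead of a resolved `data` object, the load event
// exposes a function returning a promise. The request to the server load
// function fires lazily, only if the shared load actually calls it.
function createLoadEvent(requestServerData) {
	let promise;
	return {
		// First call starts the request; later calls reuse the same promise.
		data: () => (promise ??= requestServerData())
	};
}

// A shared load function that never calls event.data() never triggers
// the round-trip to the server load function at all.
async function sharedLoad(event) {
	return { title: 'no server data needed' };
}
```

With this shape, skipping the call skips the request entirely, which is the speed-up being argued about below.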
+1. This is also related to https://github.com/sveltejs/kit/discussions/7119#discussioncomment-3781744
I'm not sure this gives us any speed boost given the current semantics: we rerun the shared `load` function only when it's deemed out of date. Part of that decision depends on the server `load` response: if it's `null` or `type: 'skip'`, and everything else is also determined to be up to date, the shared `load` function isn't called again. We can't make this decision upfront without waiting for the server data. If we didn't wait, we could end up running the shared `load` function only to discard its execution midway, resulting in needless work.
I don't follow how this is a blocker for #7213 or #7175 (or #7119)? If we do implement some kind of loading skeleton screen mechanism (whether that's in the `+layout.svelte` as proposed in #7213, or a separate `+loading.svelte` file), then it would be completely independent of `load` functions.
> We rerun the shared `load` function only when it's deemed out of date. Part of that decision depends on the server `load` response - if it's `null` or `type: 'skip'` and other things are also determined to be up to date, the shared `load` function isn't called again. We cannot make this decision upfront if we wouldn't wait for the server data.
The client does know which server `load` functions are invalid, because the client is the one telling the server which `load` functions are invalid (via the `x-sveltekit-invalidated` header when it requests server data). The only wrinkle is that the server might unexpectedly decide to throw an error or redirect, in which case shared `load` functions would be wasting work. That's arguably not the thing to optimise for, though.
So the real question for me is more about ergonomic trade-offs than performance ones. Does needing to do this sort of thing...
```diff
export async function load({ data }) {
+	const foo = await load_some_stuff();
	return {
-		...data,
-		foo: await load_some_stuff()
+		...(await data),
+		foo
	};
}
```
...pay for the ability to prevent waterfalls in cases like this?
```js
// src/routes/photo/[id]/+page.js
export async function load({ data, params }) {
	await preload_image(params.id);
	return data;
}
```
> I don't follow how this is a blocker for #7213 or #7175 (or #7119)?
It's only a "blocker" if you use the hack I was hinting at in the description: you can bypass SvelteKit's promise unwrapping of the loaded data simply by wrapping the promise in an object (i.e. `{ result: { promise: externalFetch() } }`), and then the invocation of the `load` function in this hack is non-blocking. Yes, I know this is ugly, but I thought it might be a good starting point for improving the API.
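A minimal sketch of that hack, assuming the behaviour of SvelteKit at the time of this discussion (top-level promises in a load result are awaited, but promises nested inside objects are passed through untouched). `externalFetch` is a placeholder for any slow request; in a real route this `load` would be exported from `+page.js`.

```javascript
// Placeholder for a slow external request.
async function externalFetch() {
	return { photos: ['a.jpg', 'b.jpg'] };
}

function load() {
	// The nested promise passes through SvelteKit's unwrapping unresolved;
	// the component can await it itself and render a skeleton meanwhile.
	return { result: { promise: externalFetch() } };
}
```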
> If we do implement some kind of loading skeleton screen mechanism (whether that's in the `+layout.svelte` as proposed in #7213, or a separate `+loading.svelte` file) then it would be completely independent of `load` functions.
I'm trying to keep the suggestion clear, so I won't go into details, but my further suggestion is similar to the `+loading.svelte` file. I would like to drop server-only load functions entirely and replace them with "resource providers" (you can think of them conceptually as per-session memoization of fetch requests, meaning that if the same provider is called twice with the same argument in the same session, it will be fetched only once) that are not dependent on the page routing structure.
This means that only shared load functions would be left, along with server-side "resource providers" (publicly available to be queried from both server and client) and the server `handle` method that returns the initial session value (which is passed to the shared load function). Notice that such a design eliminates the waterfall issues completely. (There is another idea for further storing the result on the shared client/server side, but it needs more thought.)
I'd be glad to know if there has already been such a discussion.
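The per-session memoization idea above could be sketched like this. All names (`createProvider`, `getUser`) are illustrative; nothing here is a real SvelteKit API.

```javascript
// Illustrative "resource provider": a per-session memoized fetcher.
// Calling the same provider twice with the same argument during one
// session performs the underlying request only once.
function createProvider(fetcher) {
	const cache = new Map(); // one cache per session
	return function provide(arg) {
		const key = JSON.stringify(arg);
		if (!cache.has(key)) {
			cache.set(key, fetcher(arg)); // store the in-flight promise itself
		}
		return cache.get(key);
	};
}

// Usage: both the server load path and the client could call the same provider.
const getUser = createProvider(async (id) => ({ id, name: `user-${id}` }));
```

Caching the promise (rather than the resolved value) also deduplicates concurrent calls made before the first request finishes.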
> So the real question for me is more about ergonomic trade-offs than performance ones. Does needing to do this sort of thing...
>
> ```diff
> export async function load({ data }) {
> +	const foo = await load_some_stuff();
> 	return {
> -		...data,
> -		foo: await load_some_stuff()
> +		...(await data),
> +		foo
> 	};
> }
> ```
>
> ...pay for the ability to prevent waterfalls in cases like this?
>
> ```js
> // src/routes/photo/[id]/+page.js
> async function load({ data, params }) {
> 	await preload_image(params.id);
> 	return data;
> }
> ```
There are cases where you want to query data both from your SvelteKit server and from another server (your own other/main backend, or a 3rd-party service), and this induces a waterfall.
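A minimal timing demo of the waterfall being described. `serverLoad` and `thirdPartyFetch` are stand-ins with ~50 ms latency each; nothing here is SvelteKit API, it only models when each request can start under the two semantics.

```javascript
const delay = (ms) => new Promise((r) => setTimeout(r, ms));

// Stand-in for the +page.server.js load (~50 ms).
async function serverLoad() {
	await delay(50);
	return { user: 'tal' };
}

// Stand-in for an independent third-party request (~50 ms).
async function thirdPartyFetch() {
	await delay(50);
	return { weather: 'sunny' };
}

// Current semantics: the shared load only runs once the server data has
// arrived, so the external request is serialized behind it (~100 ms total).
async function currentFlow() {
	const data = await serverLoad();
	const extra = await thirdPartyFetch();
	return { ...data, ...extra };
}

// Proposed semantics (`data` as a promise): both requests are in flight
// at once (~50 ms total).
async function proposedFlow() {
	const dataPromise = serverLoad(); // not awaited yet
	const extra = await thirdPartyFetch();
	return { ...(await dataPromise), ...extra };
}
```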
> I would like to drop server-only load functions entirely and replace them with "resource providers"
That's... not going to happen 😀. You're of course free to use something like telefunc inside your shared `load` functions.
> There are cases where you want to query data both from your SvelteKit server and from another server (your own other/main backend, or a 3rd-party service), and this induces a waterfall.
If you have a server-only `load` function anyway, why not `fetch` the external data there? That way everything gets batched into a single request.
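The batching suggestion could look like this in a server-only load. `getFromDatabase` and `fetchExternal` are illustrative stand-ins for a local query and a third-party request made inside `+page.server.js`; the point is that the browser makes one request and the server fans the work out in parallel.

```javascript
// Stand-in for a local database query.
async function getFromDatabase() {
	return { user: 'tal' };
}

// Stand-in for a request to a third-party service.
async function fetchExternal() {
	return { weather: 'sunny' };
}

// Shape of a +page.server.js load (would be exported in a real route):
// both requests start immediately, so there is no client-visible waterfall.
async function load() {
	const [local, external] = await Promise.all([getFromDatabase(), fetchExternal()]);
	return { ...local, ...external };
}
```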
> I would like to drop server-only load functions entirely and replace them with "resource providers"
>
> That's... not going to happen 😀. You're of course free to use something like telefunc inside your shared `load` functions.
Why strictly not? In my suggestion, when the load function is computed on the server side, calling a "resource provider" is just calling an async function (within the SvelteKit ecosystem). The idea is that the client can also call it. You can view these providers as a decomposition of the current server-only load function into (atomic?) parts.
Because that approach involves making multiple independent requests instead of having everything neatly batched up by the framework. The atomicity you describe is a bug, not a feature. Since these resource providers aren't linked to a request, they don't get access to the `RequestEvent` that contains things like `event.locals` (unless you run `handle` for each resource provider, which would be very wasteful in the case where `handle` does things like retrieve a user object from a session cookie). Getting rid of server-only `load` functions would also be a very significant breaking change.
We discussed this among the maintainers just now and decided that the cases where the current behaviour introduces an unnecessary waterfall are rare enough in practice that the ergonomic tradeoff isn't worth it. In those rare cases it's possible to avoid the waterfall by not having a `+page.server.js` alongside a `+page.js`; the `+page.js` `load` function can (for example) request data from a `+server.js` file. As such, we've decided to take this off the 1.0 milestone.
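The suggested workaround could be sketched as follows. With no `+page.server.js`, the shared load in `+page.js` starts immediately and can parallelize its work. `fetchEndpoint` stands in for `fetch`-ing a sibling `+server.js` GET handler, and `preloadImage` for the image preload from the earlier example; all names are illustrative.

```javascript
// Stand-in for fetch('/photo/[id]') hitting a +server.js GET handler.
async function fetchEndpoint(id) {
	return { id, src: `/images/${id}.jpg` };
}

// Stand-in for preloading the image.
async function preloadImage(id) {
	return `preloaded:${id}`;
}

// Shape of the +page.js load function (would be exported in a real route):
async function load({ params }) {
	const [data] = await Promise.all([
		fetchEndpoint(params.id),
		preloadImage(params.id) // runs in parallel with the data fetch
	]);
	return data;
}
```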
Describe the problem
If you have both a server load function and a shared load function, the latter can get the results of the former by reading the `data` part of the load event. The problem is that the shared load function (as well as the rendered page) has to wait for the result of the server load function. This results in two problems:
Describe the proposed solution
Just change the value of the `data` parameter to be a promise resolving to the server results. (Yes! Such a simple solution!) This will also allow optimistic UI, since the shared load function (when not in SSR) might decide to pass the promise of `data` directly to the page/layout component (while avoiding SvelteKit's promise unwrapping, of course).
Alternatives considered
Forcefully avoid using server load functions, and only use a shared load function in the relevant contexts.
Importance
would make my life easier
Additional Information
In some sense it's a "blocker" for the other optimistic UI issues requested in #7213 and #7175.