facebook / relay

Relay is a JavaScript framework for building data-driven React applications.
https://relay.dev
MIT License

Feature Request: Flatten fragments in requests #4840

Open dr3 opened 3 weeks ago

dr3 commented 3 weeks ago

What?

As part of the Relay compiler, generate "flattened" queries to use in the network request body.

Request body example:

```graphql
query useAppleQuery(
  $requestBody: v1_AppleRequestBody
) {
  apple(requestBody: $requestBody) {
    __typename
    ... on Apple {
      id
      flavour
      ...OneFragment
      ...TwoFragment
      ...ThreeFragment
    }
    ... on Error {
      errors {
        code
      }
    }
  }
}

fragment OneFragment on Apple {
  id
  taste
}

fragment TwoFragment on Apple {
  id
  taste
  colour
}

fragment ThreeFragment on Apple {
  id
  colour
}
```

becomes

```graphql
query useAppleQuery(
  $requestBody: v1_AppleRequestBody
) {
  apple(requestBody: $requestBody) {
    __typename
    ... on Apple {
      id
      flavour
      taste
      colour
    }
    ... on Error {
      errors {
        code
      }
    }
  }
}
```

Why?

The server response is identical, so given we're already running a compiler, we could do a bit more computation at build time to improve server performance (less input to parse) and to reduce the amount of data the client needs to send to the server (when not using persisted queries).

For large applications (where there are hundreds of fragments) this could yield significant performance improvements.

captbaritone commented 3 weeks ago

Yes! This is quite an interesting optimization. We already have a transform which implements it for use elsewhere in our compiler. However, like any inlining optimization, it's a tradeoff. In the example you give it's a pure win, but in many cases (where a single fragment with many fields is spread in many different selections of a single query) it can massively bloat the size of the query text, since those fields get materialized in every selection instead of being defined once and referenced from many selections.

There are likely places where this is a pure win (the fragment is only ever spread once) or a good tradeoff, but some type of heuristic would be needed.
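To make the "spread exactly once" case concrete, here is a minimal sketch of detecting those candidates. It assumes graphql-js and a parsed query document rather than Relay's compiler IR, so it is only an illustration of the idea, not the transform mentioned above:

```typescript
import { DocumentNode, visit } from "graphql";

// Names of fragments that are spread exactly once in the document: the
// "pure win" candidates, since inlining them never duplicates field text.
function singleUseFragments(doc: DocumentNode): Set<string> {
  const spreadCounts = new Map<string, number>();
  visit(doc, {
    FragmentSpread(node) {
      const name = node.name.value;
      spreadCounts.set(name, (spreadCounts.get(name) ?? 0) + 1);
    },
  });
  return new Set(
    Array.from(spreadCounts)
      .filter(([, count]) => count === 1)
      .map(([name]) => name),
  );
}
```

Anything a heuristic decides not to inline simply stays as a named fragment, so the two cases can coexist in one query.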

rbalicki2 commented 2 weeks ago

In my opinion, it would be nice to also have a "flatten all" option. It's useful, for example, to get a holistic sense of all the fields that are fetched in a query. I don't know of any online tools that will flatten for me — if those exist (and I only spent 5 seconds googling), that would satisfy me.

That being said, for persisted queries, it's possible that a flattened query will be more performant, if the backend is doing a lot of work per request when parsing the query. This describes what happens at Pinterest now. I haven't rigorously tested this, though, and have no idea if that's true for other backends.

EDIT: odd... I thought I was responding to a comment that @dr3 made about flattening fragments that appear once.

Malien commented 2 weeks ago

The biggest offenders for me are those one-time-use fragments. Covering those would be a big win regardless.

The de-duplication wins from reusing fragments have diminishing returns once we use persisted queries, since we are not shipping the query text over the network for each request. Of course, there is still an effect on the time spent parsing a larger query text, and possibly more RAM/storage required on the server to store the flattened representation. The effect on server runtime is hard to measure, since it heavily depends on the implementation and the use case (and is only relevant for enormously large de-normalized query text).

Ideally we wouldn't want to pessimize certain cases by inlining ALL of the fragments. An opt-in (opt-out?) for the aggressive inlining would be nice to have for those instances.

I am convinced that one-time-use fragment inlining is low-hanging fruit that improves all cases. I can't see downsides to shipping it enabled by default.

An opt-in "inline all" option would be beneficial and wouldn't require much thought: if the user knows they want it, let's do it. To be fair, if the user REALLY wants it, they can postprocess the persisted query text after the fact.
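For illustration, a rough sketch of what that postprocessing could look like with graphql-js on the query text (this is not a Relay compiler transform; it inlines every spread as an inline fragment, collapses redundant type conditions, and drops duplicate simple leaf fields, leaving anything with aliases, arguments, or directives untouched):

```typescript
import {
  DocumentNode, FragmentDefinitionNode, InlineFragmentNode,
  Kind, SelectionNode, parse, print, visit,
} from "graphql";

// Hoist nested inline fragments whose type condition matches the enclosing
// one, and drop duplicate "simple" leaf fields (no alias, arguments,
// directives, or sub-selection). Anything else is left untouched.
function mergeSelections(
  selections: readonly SelectionNode[],
  typeName: string | undefined,
): SelectionNode[] {
  const out: SelectionNode[] = [];
  const seenLeaves = new Set<string>();
  const push = (sel: SelectionNode) => {
    if (
      sel.kind === Kind.FIELD &&
      sel.alias == null &&
      (sel.arguments ?? []).length === 0 &&
      (sel.directives ?? []).length === 0 &&
      sel.selectionSet == null
    ) {
      if (seenLeaves.has(sel.name.value)) return;
      seenLeaves.add(sel.name.value);
    }
    out.push(sel);
  };
  for (const sel of selections) {
    if (
      sel.kind === Kind.INLINE_FRAGMENT &&
      typeName != null &&
      sel.typeCondition?.name.value === typeName &&
      (sel.directives ?? []).length === 0
    ) {
      // `... on Apple { ... }` directly inside `... on Apple { ... }`.
      sel.selectionSet.selections.forEach(push);
    } else {
      push(sel);
    }
  }
  return out;
}

export function flattenFragments(queryText: string): string {
  const doc: DocumentNode = parse(queryText);
  const fragments = new Map<string, FragmentDefinitionNode>();
  for (const def of doc.definitions) {
    if (def.kind === Kind.FRAGMENT_DEFINITION) fragments.set(def.name.value, def);
  }

  const inlined = visit(doc, {
    // Replace `...Foo` with an equivalent inline fragment; this keeps the
    // document valid without needing any schema information.
    FragmentSpread(node) {
      const frag = fragments.get(node.name.value);
      if (frag == null) return undefined;
      const replacement: InlineFragmentNode = {
        kind: Kind.INLINE_FRAGMENT,
        typeCondition: frag.typeCondition,
        directives: node.directives,
        selectionSet: frag.selectionSet,
      };
      return replacement;
    },
    // Bottom-up: collapse `... on X { ... on X { a } }` and de-duplicate the
    // leaf fields that the inlining above introduced.
    InlineFragment: {
      leave(node) {
        const selections = mergeSelections(
          node.selectionSet.selections,
          node.typeCondition?.name.value,
        );
        return { ...node, selectionSet: { ...node.selectionSet, selections } };
      },
    },
  });

  // Every spread was inlined, so the fragment definitions can be dropped.
  return print({
    ...inlined,
    definitions: inlined.definitions.filter(
      (def) => def.kind !== Kind.FRAGMENT_DEFINITION,
    ),
  });
}
```

Fed the query from the issue description, this should print the flattened version shown there. A production version would also need schema-aware merging for fields with arguments and nested selection sets.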

Following that, it would be nice to have a heuristic-based approach (maybe being more aggressive when persisted queries are enabled?). How we would come up with a decent heuristic isn't a trivial question, and the implementation isn't even likely to be prioritized by the Relay team.
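Just to give the idea a shape, one hypothetical knob (the name and thresholds here are made up, not anything Relay ships) could weigh the duplicated text a fragment would introduce:

```typescript
// Hypothetical inlining policy: always inline single-use fragments, and
// otherwise inline only when the duplicated text stays under a budget.
// The budget is looser for persisted queries, where only server-side parse
// cost matters (the text is not shipped per request). Thresholds are made up.
function shouldInline(
  spreadCount: number,
  fragmentTextLength: number,
  persistedQueriesEnabled: boolean,
): boolean {
  if (spreadCount <= 1) return true;
  const duplicatedBytes = (spreadCount - 1) * fragmentTextLength;
  return duplicatedBytes <= (persistedQueriesEnabled ? 4096 : 512);
}
```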

I'd rather see at least the minimal version shipping, rather than all-or-nothing.

mjmahone commented 2 weeks ago

This is something we're currently experimenting with, but having tried it, I would not recommend doing inlining by default at compile time.

There are a few reasons here:

- In the future, the parent field may change to return an interface instead of a concrete type.
- It breaks tooling.

Now, I do think this optimization is worth pursuing, but I now suspect (though I have no concrete evidence) that it should be done as a post-persistence optimization. Basically: for each persisted document, store a secondary (or pre-parsed) document that applies this (and other) optimizations. This also keeps you robust to changes: if the "optimized" document is ever still in use but no longer valid, you can re-run the optimization.
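As a minimal sketch of that shape (the cache and function names here are hypothetical, assuming graphql-js; the canonical persisted text stays the source of truth):

```typescript
import { DocumentNode, parse } from "graphql";

// Hypothetical server-side cache: persisted id -> optimized, pre-parsed
// document. The canonical persisted text is never rewritten, so a stale or
// invalid optimized copy can always be rebuilt from it.
const optimizedCache = new Map<string, DocumentNode>();

function optimizedDocument(
  persistedId: string,
  canonicalText: string,
  optimize: (text: string) => string, // e.g. a fragment-flattening pass
): DocumentNode {
  let doc = optimizedCache.get(persistedId);
  if (doc == null) {
    doc = parse(optimize(canonicalText));
    optimizedCache.set(persistedId, doc);
  }
  return doc;
}
```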

Knowing not to add this complexity is hard to realize without having tried it in the first place, but hopefully we can prevent a bit of wasted, duplicative effort. If you still think this optimization would be useful for you, I'd encourage you to apply it yourself: you can do so by writing your own inline + merge transform on the document IR that Relay's compiler produces, using Relay's IR transform primitives. It would be interesting to compare notes with other attempts!

Malien commented 2 weeks ago

> in the future the parent field may change to return an interface instead of a concrete type

That's a breaking change to the schema anyway. I'm not sure that, just because it happens not to break in certain circumstances (the stars have to align), we should leave those cases unaffected. And by stars aligning I mean the way you organize components and their data requirements, and the way you handle {} being returned from a field you had good reason to assume would return either an object with the requested properties or null (if the type was nullable, which it is likely to be). {} is a distinctly different case.

> It breaks tooling

I'm honestly not aware of such tooling and its breakage behaviour; I'd have to check those tools out.

I'd say that correlating error -> field -> place in code where it's requested isn't that hard, but that's subjective and only reflects my experience.

> I now suspect (but have no concrete evidence) it should be done as a post-persistence optimization

I'd argue that unpersisted queries stand to benefit even more from careful (whatever that means) inlining, as it shortens the query text sent with each request. Un-persisted queries are likely to be parsed/analyzed by the server on every request, in contrast with persisted queries, whose execution plan is likely to be cached (since all of the query permutations are pre-determined and known beforehand).