MarkKoester opened this issue 1 year ago
Did you ever resolve this? We are just stumbling on this as well
Sorry I missed this in the past. I created a pull request that should circumvent this issue by using an iterator rather than recursing for each level of discovered fragments: https://github.com/graphql/graphql-js/pull/4116
@Crizzooo @MarkKoester in what circumstances are you hitting a depth of 10,000? Would love to have more information on that.
So in our case, we heavily use the client preset and client-side fragments. This led to the generated file being massive, to the point where importing from `./gql` was effectively OOMing the Hermes environment in the React Native simulator.

To solve it, we now generate code into a `./base` folder. In there, we run a script that moves all the constants into their own file, and all the types into their own file.

We then copy the `.gql` file and replace its import line with:

import * as types from './base/consts.generated';

In our `.graphql` file, we have this:
/**
* This is not a generated file.
* This is a hardcoded file mimicking what the generated/base/index does, but using our split files instead to export all consts and types.
*/
/* eslint-disable */
export * from './base/consts.generated';
// @ts-expect-error - complaining about reexport
export * from './base/types.generated';
This effectively fixes our simulators again, but it took manual scripting on top of the graphql-codegen outputs.
If curious, here is the default graphql file created by codegen. Let me know if there's any other info I can provide, @JoviDeCroock.
Hmm, that would not be related to this issue though, as the issue here is a stack depth limit that happens on the server that validates/executes your request. The issue you are facing seems related to types/parsing, no?
Correct, @JoviDeCroock. Originally I commented as it looked like a similar cause (React Native was displaying a similar max call stack error). Then I learned more about exactly what was breaking (reading the types, not generating them). So just sharing more on our issue, but no, it's not exactly the same as the OP here.
@JoviDeCroock I don't remember why exactly, but I was doing some tests to see whether a ton of fragments in a query would be handled gracefully. The example was contrived, but it seemed like an edge case worth accounting for.
I noticed during some testing that sending queries with 10,000 fragments causes a
RangeError: Maximum call stack size exceeded
error to be thrown from OverlappingFieldsCanBeMergedRule. I was wondering if this is the intended behavior and, if not, how it should be handled. This was observed on versions 16.6.0 and 16.7.2.
Below is the stack trace of the error and the steps to reproduce
Error
Reproduction steps
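As a rough illustration of the kind of document involved (a hedged sketch, not the exact reproduction above — `buildDeepFragmentQuery` and the `Node` type with an `id` field are hypothetical), the query chains one fragment into the next N times:

```javascript
// Hypothetical sketch: build a query document with N chained fragment
// spreads (F0 spreads F1, F1 spreads F2, ...). Validating a document shaped
// like this is what triggers the "Maximum call stack size exceeded"
// RangeError in OverlappingFieldsCanBeMergedRule on affected versions.
function buildDeepFragmentQuery(n) {
  const parts = ['query Q { ...F0 }'];
  for (let i = 0; i < n; i++) {
    const next = i + 1 < n ? `...F${i + 1}` : 'id';
    parts.push(`fragment F${i} on Node { ${next} }`);
  }
  return parts.join('\n');
}

const document = buildDeepFragmentQuery(10000);
// Then, with graphql-js installed and a schema defining Node:
//   const { parse, validate } = require('graphql');
//   validate(schema, parse(document)); // throws RangeError before the fix
```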