Closed Weiko closed 3 days ago
Expose `MUTATION_MAXIMUM_RECORD_AFFECTED` in the `getClientConfig` endpoint: in `api/src/config/config.service.ts`, add `MUTATION_MAXIMUM_RECORD_AFFECTED` to the configuration object returned by `getClientConfig`.
```typescript
getClientConfig() {
  return {
    // ...
    // Env vars are strings, so parse to a number before exposing; fall back to 60.
    MUTATION_MAXIMUM_RECORD_AFFECTED: parseInt(process.env.MUTATION_MAXIMUM_RECORD_AFFECTED ?? '60', 10),
  };
}
```
Modify the frontend to handle batching of mutations: in `frontend/src/hooks/useMutations.ts`, split large record sets into batches of at most `MUTATION_MAXIMUM_RECORD_AFFECTED`.
```typescript
const batchMutations = async (records) => {
  const limit = config.MUTATION_MAXIMUM_RECORD_AFFECTED;
  // Issue mutations sequentially, at most `limit` records at a time.
  for (let i = 0; i < records.length; i += limit) {
    const batch = records.slice(i, i + limit);
    await mutateBatch(batch); // Assume mutateBatch is a function that handles the mutation
  }
};
```
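The slicing logic above can be isolated into a pure helper, which makes the batch boundaries easy to unit-test on their own. This is an illustrative sketch; `chunkRecords` is not an existing function in the codebase:

```typescript
// Split an array into consecutive chunks of at most `limit` items.
// Illustrative helper: a hook like batchMutations could call this
// before awaiting each batch.
function chunkRecords<T>(records: T[], limit: number): T[][] {
  if (limit <= 0) {
    throw new Error('limit must be a positive integer');
  }
  const chunks: T[][] = [];
  for (let i = 0; i < records.length; i += limit) {
    chunks.push(records.slice(i, i + limit));
  }
  return chunks;
}
```

With a limit of 3, seven records split into chunks of 3, 3, and 1, so the last batch is simply smaller rather than dropped.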
Handle UI updates for batching and async jobs: in `frontend/src/components/MutationProgress.tsx`, render a progress bar as batches complete:

```tsx
const MutationProgress = ({ total, completed }) => (
  <ProgressBar now={(completed / total) * 100} />
);
```

Use the `MutationProgress` component in the relevant UI parts where mutations are triggered.

Handle async job initiation for intensive mutations:
frontend/src/hooks/useMutations.ts
```typescript
const handleIntensiveMutation = async (records) => {
  if (records.length > INTENSIVE_MUTATION_THRESHOLD) {
    await initiateAsyncJob(records); // Assume initiateAsyncJob is a function that handles async job initiation
  } else {
    await batchMutations(records);
  }
};
```
See https://github.com/twentyhq/twenty/issues/6023 as a "temporary" solution.
Is #6039 fixing this issue? If not, could you add some tags, @Weiko? :)
Only part of it, this should go with #5169
@lucasbordeau is working on it
Scope & Context
Currently we have a limitation on the API - `MUTATION_MAXIMUM_RECORD_AFFECTED`, defined as an env variable - which caps the number of records that can be affected by a GraphQL mutation (specifically update and delete mutations; creations are not concerned for some reason). This limitation comes from a pg_graphql restriction, see "atMost" in the documentation here: https://supabase.github.io/pg_graphql/api. If the env variable is not specified, it falls back to the default value (60). Keeping a cap is also good practice to avoid overloading the server, and in this case the DB. It also tells the user: "We handle this much, so you should expect an acceptable timeframe for your request; more than that might result in degraded performance or timeouts."
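As a hedged sketch of what this API-side cap amounts to (the function and constant names below are illustrative, not actual pg_graphql internals), the check is essentially a length comparison against the configured limit before the mutation runs:

```typescript
const DEFAULT_MUTATION_MAXIMUM_RECORD_AFFECTED = 60;

// Illustrative guard mirroring pg_graphql's "atMost" behavior: reject a
// mutation that would affect more records than the configured limit.
// `envValue` stands in for process.env.MUTATION_MAXIMUM_RECORD_AFFECTED.
function assertWithinMutationLimit(recordCount: number, envValue?: string): void {
  const limit =
    envValue !== undefined && envValue !== ''
      ? Number.parseInt(envValue, 10)
      : DEFAULT_MUTATION_MAXIMUM_RECORD_AFFECTED;
  if (recordCount > limit) {
    throw new Error(
      `Mutation would affect ${recordCount} records, maximum is ${limit}`,
    );
  }
}
```

With no env value set, a mutation touching 60 records passes while 61 is rejected, which is exactly the error the frontend currently surfaces to the user unhandled.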
Current behavior
For example, when we try to delete more than {MUTATION_MAXIMUM_RECORD_AFFECTED} records, we get an error. That is fine from the API point of view, but the frontend should handle it properly. This is even more of an issue UX-wise when people check the box at the top of the table, which implies selecting all records so that the mutation affects all of them (even those that are not loaded yet).
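Because "select all" includes rows the table has not fetched yet, the client first needs the full set of record ids before it can batch anything. A sketch under stated assumptions - `fetchPage` is a hypothetical paginated query function, not an existing hook:

```typescript
// Hypothetical page fetcher: returns up to `pageSize` record ids starting
// at offset `cursor`, or fewer (possibly zero) when the table is exhausted.
type FetchPage = (cursor: number, pageSize: number) => Promise<string[]>;

// Collect every record id, including rows the table has not loaded yet,
// so a "select all" mutation can be batched over the full set.
async function collectAllRecordIds(
  fetchPage: FetchPage,
  pageSize: number,
): Promise<string[]> {
  const ids: string[] = [];
  let page: string[];
  do {
    page = await fetchPage(ids.length, pageSize);
    ids.push(...page);
  } while (page.length === pageSize); // a short page means we reached the end
  return ids;
}
```

The resulting id list can then be fed to the batching strategy described in the comments above, keeping each individual mutation under the API cap.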
Expected behavior
We still want to keep a limitation on the API, but we should handle intensive mutations on the client with some strategies:
Technical inputs