Here is a breakdown of how the normalization workflow works:
1. The user clicks a token tag, such as `accessibility`
2. The `addManualNormalizations` API route is called (it also supports passing multiple tags)
3. Data is loaded for the current survey, edition, entities, raw response, and normalized response
4. The new token(s) are added to the raw response
5. The new token(s) are added to the normalized response
6. Both the raw and normalized responses are returned to the client
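The core of steps 4-6 could be sketched as a pure token-merging function. This is a sketch only: the response shapes and the `customNormalizations`/`tokens` field names are assumptions for illustration, not the actual surveyadmin schema.

```typescript
// Sketch of steps 4-6 of the workflow; field names are assumptions.
interface RawResponse {
  _id: string;
  customNormalizations: string[]; // manually added token IDs
}

interface NormalizedResponse {
  _id: string;
  tokens: string[]; // token IDs matched for this answer
}

// Add the new token(s) to both documents, skipping duplicates,
// and return both so the client can update its view.
function addTokens(
  raw: RawResponse,
  normalized: NormalizedResponse,
  tokens: string[]
): { raw: RawResponse; normalized: NormalizedResponse } {
  const merge = (existing: string[]) => [...new Set([...existing, ...tokens])];
  return {
    raw: { ...raw, customNormalizations: merge(raw.customNormalizations) },
    normalized: { ...normalized, tokens: merge(normalized.tokens) },
  };
}

// Example: the user clicks the "accessibility" tag.
const result = addTokens(
  { _id: "r1", customNormalizations: [] },
  { _id: "n1", tokens: ["css"] },
  ["accessibility"]
);
console.log(result.normalized.tokens); // ["css", "accessibility"]
```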
Issues to solve:
- The function loads a lot of data, which is slow. This is partly because we reuse the regular normalization functions, which replace the entire document instead of performing a targeted update, and that in turn requires loading the entire document first.
- Also explore whether dropping the serverless architecture (which we don't need at all for surveyadmin) would make it easier to store some things in memory rather than in Redis.
- There is currently no caching mechanism on the client to reuse the return value from the fetch call and update the data shown to the user in the table.
- Because loading anywhere between 1k and 10k+ documents from Mongo is very slow, I'm caching the entire set of responses for a question in Redis. But that means that even if we update the client cache, the data will be reloaded from Redis as soon as the user refreshes their tab, and all changes will disappear until the cache is cleared. So I guess we can't use Redis here?
- Because the table can contain 500-1000 rows, any state update that triggers all the child components to re-render can slow down the UI a lot.
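For the first issue, one option is a targeted Mongo update that appends the token(s) without loading or rewriting the whole document. A minimal sketch, assuming a `customNormalizations` array field (the real schema may differ):

```typescript
// Build a targeted MongoDB update document that appends tokens.
// $addToSet with $each adds only the tokens not already present,
// so no read-modify-write of the full document is needed.
function buildAddTokensUpdate(tokens: string[]) {
  return {
    $addToSet: { customNormalizations: { $each: tokens } },
  };
}

// This would be passed to updateOne, e.g.:
// await collection.updateOne({ _id: responseId }, buildAddTokensUpdate(["accessibility"]));
const update = buildAddTokensUpdate(["accessibility", "a11y"]);
console.log(JSON.stringify(update));
```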
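For the missing client-side caching, the responses returned by the API route could be merged into the cached list by `_id`, so the table re-renders with fresh data without refetching everything. A sketch, assuming the cache is a plain array of response objects (a hypothetical shape, not the actual client state):

```typescript
interface CachedResponse {
  _id: string;
  tokens: string[];
}

// Merge the responses returned by addManualNormalizations into the
// client-side cache, replacing entries by _id and leaving the rest
// untouched. Returns a new array so state updates stay immutable.
function mergeIntoCache(
  cache: CachedResponse[],
  updated: CachedResponse[]
): CachedResponse[] {
  const byId = new Map(updated.map((r) => [r._id, r] as const));
  return cache.map((r) => byId.get(r._id) ?? r);
}

const cache = [
  { _id: "a", tokens: [] },
  { _id: "b", tokens: ["css"] },
];
const next = mergeIntoCache(cache, [{ _id: "a", tokens: ["accessibility"] }]);
console.log(next[0].tokens); // ["accessibility"]
```

Combined with memoized row components (so only the rows whose data actually changed re-render), this would also mitigate the 500-1000-row re-render slowdown.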