The mapping from role ids to role names is a huge JSON file generated at build time. It weighs about 1MB (roughly half that gzipped), which is larger than the rest of the web app and significantly slows down initial loading.
One alternative is to load it asynchronously, but at least a naive implementation of this resulted in raw role ids flashing on screen before being decoded to names.
There are many ways in which this can be optimized.
We currently generate the dictionary by taking base names and generating variations of each name (e.g. from "whitelist" we generate "WHITELIST", "WHITELIST_ROLE", "ROLE:WHITELIST", and so on) as well as variations of the identifier scheme (using the name bytes directly, hashing them, or ABI-encoding then hashing). This produces lots of ids that are not necessarily in use, and the size of the dictionary grows multiplicatively with each new name variation or id scheme. We could instead generate only the ids that are actually in use, which would probably drastically reduce the size of the dictionary.
A second question is whether we want to send a pregenerated dictionary or send only the role names and hash them on the user's computer. The latter reduces what goes over the network by omitting the hashes (which, because of their randomness, don't compress well), but requires computation on the user's side. That computation would need to happen off the main thread, so it introduces some complexity around Web Workers and properly caching the result.
If we send the prebuilt dictionary, we may not want to send it in full; instead we could expose an endpoint for each id and in this way transfer only what's strictly necessary.