When users load or navigate to Discover, multiple requests are triggered upfront before the initial search request is sent. These requests, primarily checks for data and data view availability, are currently processed sequentially, contributing to a slower perceived loading time and a less responsive user experience.
Proposed Improvements:
Parallelize Initial Requests:
Currently, sequential processing of these checks adds significant delay (e.g., 4 requests taking 300ms each result in a total loading time of 1200ms).
By refactoring the logic to initiate these requests in parallel, we can cut this down to the time of the longest request (e.g., reducing 1200ms to 300ms).
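A minimal sketch of this refactor, using `Promise.all` to await all startup checks concurrently; the check function names and latencies below are illustrative stand-ins, not Kibana's actual APIs:

```typescript
// Stand-in for a network round trip of the given duration.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Hypothetical startup checks -- each simulates one HTTP request.
async function checkHasUserData(): Promise<boolean> {
  await delay(50);
  return true;
}

async function checkHasDataViews(): Promise<boolean> {
  await delay(50);
  return true;
}

// Fire all checks at once: total wait is roughly the slowest
// request rather than the sum of all of them.
async function runStartupChecks(): Promise<{ hasUserData: boolean; hasDataViews: boolean }> {
  const [hasUserData, hasDataViews] = await Promise.all([
    checkHasUserData(),
    checkHasDataViews(),
  ]);
  return { hasUserData, hasDataViews };
}
```

Because the checks are independent of each other, no ordering is lost by starting them together; error handling per check can be added with `Promise.allSettled` if one failing check should not block the others.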
Implement Caching:
Introduce a caching strategy (e.g., stale-while-revalidate (SWR) caching, similar to what we use for DataView field_caps requests) to store the results of these checks.
Subsequent requests can retrieve data from the cache, significantly improving UI performance (e.g., reducing a 300ms request to 30ms or less).
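A minimal stale-while-revalidate cache could look like the sketch below: a cache hit is returned immediately while a background fetch refreshes the entry. This is an assumed shape for illustration; Kibana's field_caps caching may be structured differently.

```typescript
type Fetcher<T> = () => Promise<T>;

// Minimal SWR cache sketch: serve the stale value instantly,
// revalidate in the background.
class SwrCache<T> {
  private cache = new Map<string, T>();

  async get(key: string, fetcher: Fetcher<T>): Promise<T> {
    const cached = this.cache.get(key);
    if (cached !== undefined) {
      // Cache hit: kick off a background refresh, but do not await it,
      // so the caller gets the cached value with no network delay.
      fetcher()
        .then((fresh) => this.cache.set(key, fresh))
        .catch(() => {}); // keep the stale value if revalidation fails
      return cached;
    }
    // Cache miss: pay the full request cost once, then populate the cache.
    const fresh = await fetcher();
    this.cache.set(key, fresh);
    return fresh;
  }
}
```

With this pattern, only the first navigation pays the full request latency; subsequent ones resolve from memory while staying eventually fresh.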
Optimize Redundant Requests:
Evaluate the necessity of sending two separate requests to check for data and data views.
While the "no data" state is useful, for the majority of users with data, this results in unnecessary delays.
Consider showing the "no data" state only once during a fresh session or persisting a "has data" state in local storage, thereby reducing the need for repeated checks and improving overall UI responsiveness.
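One way to sketch the "persist a 'has data' state" idea, assuming a `localStorage`-like store; the storage key and function names are hypothetical, chosen for illustration:

```typescript
// Minimal interface matching window.localStorage, so the sketch
// also runs outside a browser with an in-memory substitute.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Hypothetical storage key -- not an existing Kibana key.
const HAS_DATA_KEY = "discover:hasData";

// Skip the "has data" request entirely once we have seen data before;
// only a fresh session (or cleared storage) pays the network cost.
async function hasDataWithPersistence(
  storage: KeyValueStore,
  checkHasData: () => Promise<boolean>,
): Promise<boolean> {
  if (storage.getItem(HAS_DATA_KEY) === "true") {
    return true; // no network round trip
  }
  const hasData = await checkHasData();
  if (hasData) {
    storage.setItem(HAS_DATA_KEY, "true");
  }
  return hasData;
}
```

The trade-off is staleness: if the user later deletes all their data, the flag would need to be invalidated (e.g., cleared when a search returns an index-not-found error), which is why this is framed as reducing repeated checks rather than eliminating the "no data" state.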
By implementing these optimizations, we can significantly reduce waiting times, leading to a faster, more efficient experience for users.