csadorf opened this issue 4 years ago
This has been partially mitigated by introducing response caching through CacheControl (see PR #161).
How does #161 resolve the primary issue of blocking on timeouts?
The request responses are cached, meaning subsequent requests will not time out. This doesn't solve the issue for the original request, nor for what happens when the cache is flushed.
A timeout is specified explicitly for all requests (currently 10 s); when it expires, the connection raises a timeout error, which is in all cases handled within a try/except block.
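To illustrate the pattern being described, here is a minimal sketch (not the client's actual code) of a cached session with an explicit per-request timeout handled in a try/except block. The `fetch_links` helper and `base_url` parameter are hypothetical; only the 10 s timeout and the use of CacheControl come from the discussion above.

```python
import requests
from cachecontrol import CacheControl

TIMEOUT_SECONDS = 10  # per-request timeout mentioned above

# Wrap a requests session so responses are cached (cf. PR #161).
session = CacheControl(requests.Session())

def fetch_links(base_url):
    """Return the parsed /links response, or None if the provider is unreachable."""
    try:
        response = session.get(f"{base_url}/links", timeout=TIMEOUT_SECONDS)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException:
        # A slow or unreachable provider is skipped instead of crashing the app,
        # but the caller has still blocked for up to TIMEOUT_SECONDS.
        return None
```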
Currently, the startup time is not considered substantial, especially not with the upgraded Voilà v0.2.x.
However, the question remains for running this client outside Voilà, as well as for when/if "many" providers start offering OPTIMADE APIs through the OPTIMADE providers list.
Note that if the requests were converted to asynchronous requests, the actual waiting time until the application is usable would not change, at least not in the current design.
Thank you for the explanation. While I'm sure that the caching will reduce some waiting times and server load, I do not consider this issue resolved.
So your suggestion would still be to accept that the waiting/startup time will not change, but to prefer having the widgets load in while the UI is locked against user input until all requests have completed? Rather than the current approach, where one simply waits for the application to start, and once the UI is present it is ready for user input?
From a UX perspective it is much preferable to keep the UI responsive. Blocking requests freeze the UI, which is not only frustrating for users but in many cases leads to the reasonable assumption that the app has crashed.
Furthermore, with the current behavior, the startup of the app can freeze for a significant amount of time (I consider anything more than just a few seconds unacceptable, let alone more than 10 seconds). Unless anything significant has changed, the UI will actually freeze during startup for about half a minute when servers are down, because there are multiple requests, each with its own timeout. The vast majority of users (including me, until I figured out the issue) will assume that the app has crashed and will not wait that long.
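The arithmetic behind that half-minute freeze is simple: sequential blocking requests accumulate their timeouts. The sketch below is illustrative only; the provider URLs are placeholders and the three-provider count is an assumption used to show how three unreachable endpoints at 10 s each block for roughly 30 s.

```python
import time
import requests

UNREACHABLE_PROVIDERS = [
    "https://example.org/optimade-a",
    "https://example.org/optimade-b",
    "https://example.org/optimade-c",
]

start = time.monotonic()
for url in UNREACHABLE_PROVIDERS:
    try:
        requests.get(f"{url}/links", timeout=10)
    except requests.exceptions.RequestException:
        pass  # each failed provider still costs up to the full 10 s timeout
print(f"UI was blocked for ~{time.monotonic() - start:.0f} s")
```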
For the latter part, this is hopefully where the caching will come in handy, but that will of course depend on a number of things, and it will not solve this issue.
I hear you, and I will keep the issue open to track this until a viable solution or final decision presents itself. Hopefully this exchange also clarifies the issue for future reference :)
Problem description
Currently, the client issues a number of blocking requests to remote endpoints during initialization, and I assume also during operation. This can lead to a major degradation in function if, for whatever reason, these endpoints are slow or completely unreachable.
Suggested solution
Convert all requests into non-blocking asynchronous requests to avoid these issues. It may be necessary to introduce a "locked" state for any front-end widgets, indicating to the user that the widget is working in the background and preventing the user from issuing additional input (see the sketch below).
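A minimal sketch of this suggestion, assuming an ipywidgets-based front end: the provider requests run in a background thread while the dropdown sits in a "locked" (disabled) state, so the rest of the UI stays responsive. Names such as `fetch_provider_links` and `PROVIDER_URLS` are hypothetical, not existing client code.

```python
import threading
import ipywidgets as widgets
import requests

PROVIDER_URLS = ["https://example.org/optimade-a", "https://example.org/optimade-b"]

provider_dropdown = widgets.Dropdown(
    options=[], description="Provider:", disabled=True  # locked until data arrives
)

def fetch_provider_links():
    options = []
    for url in PROVIDER_URLS:
        try:
            response = requests.get(f"{url}/links", timeout=10)
            response.raise_for_status()
            options.append(url)
        except requests.exceptions.RequestException:
            continue  # skip unreachable providers without blocking the UI thread
    provider_dropdown.options = options
    provider_dropdown.disabled = False  # unlock once all requests have finished

# The widget renders immediately; the requests run in the background.
threading.Thread(target=fetch_provider_links, daemon=True).start()
```

An asyncio/aiohttp approach would achieve the same effect and would additionally let the provider requests run concurrently rather than sequentially; the thread-based version is shown here only because it requires no changes to the existing synchronous request code.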
Note: I've labeled this issue as a bug and not as a feature request, because it can lead to a degradation in function significant enough that the widget should be considered non-functional.