Open BeritJanssen opened 1 year ago
Nope, I think you already covered the options.
Caching might make this problem less severe, but the options you list are a more structural solution.
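As a rough illustration of the caching idea, a per-corpus cache could be as simple as memoizing the loader. This is a sketch, assuming a hypothetical `load_word_models(corpus)` function that does the expensive work; the real loader's name and signature may differ.

```python
from functools import lru_cache

def load_word_models(corpus: str) -> dict:
    # Hypothetical stand-in for the expensive parsing of the model
    # source files; the real loader lives in the word models code.
    return {"corpus": corpus}

@lru_cache(maxsize=None)
def get_word_models(corpus: str) -> dict:
    # First call per corpus loads from disk; subsequent calls
    # return the same cached object without touching the files.
    return load_word_models(corpus)
```

Note that `lru_cache` lives in the process, so each worker process would still pay the loading cost once.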
As a short-term solution, it may also be worth going over the code (e.g. #901). The word model views were not developed for models with such significant loading times, so they are not necessarily efficient.
Anyway, I just wanted to point out that we are planning to convert the backend to Django, so perhaps hold off on a solution that is very Flask-specific.
Currently, for every request, we reload the word models from source files. This may decrease performance; perhaps we want to preload the models when the word_models path is requested for a given corpus for the first time, and then persist the object so it can be queried directly. Options to do so:

- `current_app`
- context (see this thread)

@jgonggrijp, @JeltevanBoheemen, @oktaal: any other ideas or suggestions?
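To make the `current_app` option concrete, here is a minimal sketch of lazy-loading the models on first request and keeping them on the application object so later requests reuse them. The `load_word_models` function and the route path are hypothetical placeholders, not the actual project code.

```python
from flask import Flask, current_app, jsonify

app = Flask(__name__)

def load_word_models(corpus):
    # Hypothetical stand-in for the expensive load from source files.
    return {"corpus": corpus}

def get_word_models(corpus):
    # Keep loaded models on the app's extensions dict so they persist
    # across requests within the same worker process.
    cache = current_app.extensions.setdefault("word_models", {})
    if corpus not in cache:
        cache[corpus] = load_word_models(corpus)
    return cache[corpus]

@app.route("/word_models/<corpus>")
def word_models_view(corpus):
    # Every request after the first for a given corpus hits the cache.
    return jsonify(get_word_models(corpus))
```

One caveat: like any in-process cache, this is per worker, so a multi-process deployment would load each corpus once per worker. A similar pattern (a module-level or app-level cache) would carry over to Django, so it need not block the planned conversion.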