On sanit.oahpa.no, a search takes anywhere from ~1.4 seconds up to over 8 seconds.
Some early hypotheses that are now rejected:
Server is overloaded, leading to queuing. - No, not really. The server can be under very low load, and searches still take a long time.
The old architecture is just faster. - Maybe? But there is no indication that the code running under python2 + fastcgi is any faster, at any amount of load, than the newer version of the code running under python3 + gunicorn. If anything, in local benchmarks the old setup is slightly slower, and scales worse under high traffic than the new setup. Still, it operates slightly differently, and the problem started after the switch, so one is led to believe that there must be something there...
Oddities: The other services, running the same app and the same code (but with different configurations), do not appear to be this slow.
Step 1: Reproduce it locally
The repository is cloned and run through gunicorn, as it is in production. Initial test searches yielded times on the order of 140 ms.
One slowdown appears once the app is configured properly and finds "lang-sme/src/analyser-dict-gt-desc.hfstol" and "lang-sme/src/generator-dict-gt-norm.hfstol". Without these files, the app gives no indication that they are missing unless a "configuration check" tool is run; it silently runs as much of the search as it can without them.
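The silent-failure behaviour could be caught at startup with a simple file check. A minimal sketch of such a check, assuming the paths above are resolved relative to some base directory (the function name and structure here are hypothetical, not the app's real code):

```python
import os

# Transducer files the search needs, as named in the text above.
REQUIRED_FILES = [
    "lang-sme/src/analyser-dict-gt-desc.hfstol",
    "lang-sme/src/generator-dict-gt-norm.hfstol",
]

def missing_files(base_dir="."):
    """Return the required transducer files not found under base_dir."""
    return [
        f for f in REQUIRED_FILES
        if not os.path.isfile(os.path.join(base_dir, f))
    ]

if __name__ == "__main__":
    for f in missing_files():
        # Logging loudly at startup avoids silently degraded searches.
        print(f"warning: missing transducer file: {f}")
```

Running this at app startup (or wiring it into the existing "configuration check" tool) would surface the degraded mode before the first search does.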
With these files available and the configuration set up properly, a search request takes about ~2.4 seconds on the initial request, and a subsequent search for the same word takes ~1.4 seconds, because the search results are cached.
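The first-request vs. repeat-request gap is consistent with an in-memory cache in front of an expensive lookup. A minimal sketch of that pattern (the lookup function is a stand-in for the real analyser/generator call, not the app's actual code):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=1024)
def search(word):
    """Stand-in for the expensive transducer lookup."""
    time.sleep(0.1)  # simulate the slow part of the search
    return f"results for {word}"

start = time.perf_counter()
search("giella")  # first call pays the full cost
first = time.perf_counter() - start

start = time.perf_counter()
search("giella")  # repeat call is served from the cache
cached = time.perf_counter() - start

assert cached < first
```

In this toy version the cached call skips the sleep entirely; in the real app the ~1.4 s floor on cached requests suggests a large fixed cost that is *not* covered by the cache, which is worth profiling separately.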
So this is an area where we can optimize, but it comes nowhere near explaining the >3 second search times we see on the production server.
Conclusion so far: Have not been able to reproduce it locally.