This issue won't be closed in a single task; it collects a variety of improvements and tests we should run. Infrastructure remodeling and #149 are connected.
Following up on a chat with @djfm, the current server design needs three major steps before we address optimization:
1. a ready-to-use tool for stress testing (web + DB actions)
2. alerting on resource exhaustion (best case, integrated with Slack + email)
3. performance measurement in place (backend and via Puppeteer)
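For the stress-testing step, a minimal harness can be sketched as below. This is purely illustrative: `makeRequest` is a pluggable function standing in for a real HTTP call to the backend, and the stub used in the demo just resolves after a random delay; none of these names come from the project.

```javascript
// Hypothetical sketch: fire N requests in parallel through a pluggable
// `makeRequest` function and summarize latencies with percentiles.

function percentile(sortedMs, p) {
  // sortedMs: ascending array of latencies in ms; p in [0, 1]
  const idx = Math.min(sortedMs.length - 1, Math.floor(p * sortedMs.length));
  return sortedMs[idx];
}

async function stressTest(makeRequest, concurrency) {
  const timings = await Promise.all(
    Array.from({ length: concurrency }, async () => {
      const start = Date.now();
      await makeRequest();
      return Date.now() - start;
    })
  );
  timings.sort((a, b) => a - b);
  return {
    count: timings.length,
    p50: percentile(timings, 0.5),
    p95: percentile(timings, 0.95),
  };
}

// Stub request: resolves after a small random delay, simulating the backend.
const fakeRequest = () =>
  new Promise((resolve) => setTimeout(resolve, 10 + Math.random() * 20));

stressTest(fakeRequest, 5).then((stats) => console.log(stats));
```

Swapping `fakeRequest` for a real call against staging (with the 80/20 valid/invalid ID mix mentioned below) would turn this into the parallel-call test described later in the issue.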
Remember that we have to document these improvements. We're transforming a working prototype into a stable product, and this is a nice story we should tell.
When these points are ready, we can look for the breaking points of the software. A bunch of easy optimizations are:
- [ ] For public APIs that use complex MongoDB aggregation pipelines, use the caching system as done in `backend/routes/public.js`
- [ ] Open the DB connection at startup and manage a pool of existing connections; also raise the pool limit (`maxPoolSize` in the Node driver)
- [ ] Make proper use of nginx caching
- [ ] Review whether the MongoDB indexes actually match the queries we run
- [ ] Read `mongodb.log` and optimize based on what is reported there
- [ ] Add thresholds for APIs that trigger actions from our server
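The first checklist item boils down to memoizing an expensive aggregation for a short TTL instead of re-running the pipeline on every public API hit. The sketch below illustrates that idea only; it is not the actual code in `backend/routes/public.js`, and the stub `runPipeline` stands in for a real `collection.aggregate(...).toArray()` call.

```javascript
// Minimal TTL cache: entries expire ttlMs after being set.
class TTLCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, expiresAt }
  }

  get(key) {
    const entry = this.entries.get(key);
    if (!entry || Date.now() > entry.expiresAt) {
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key, value) {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// Wrap an async producer so repeated calls within the TTL reuse the result.
function cached(producer, cache) {
  return async (key) => {
    const hit = cache.get(key);
    if (hit !== undefined) return hit;
    const value = await producer(key);
    cache.set(key, value);
    return value;
  };
}

// Demo with a stub producer standing in for a MongoDB aggregation pipeline.
let pipelineRuns = 0;
const runPipeline = async (key) => {
  pipelineRuns += 1;
  return `result-for-${key}`;
};
const getStats = cached(runPipeline, new TTLCache(5 * 60 * 1000));
```

The trade-off is staleness: a 5-minute TTL is fine for public aggregate stats but would be wrong for anything that must reflect writes immediately.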
Tested with 5 parallel calls to the backend (80% with a correct ID, 20% with an invalid ID): the response time always remained around 0.5 seconds, and activity running in parallel did not show any slowdown.