Currently, when running in production, pm2 is used as a load balancer. This is problematic because:
When the API is started, the process runs some common queries and stores their results (all columns) in memory.
The API is also a tileserver, and the tileserver keeps an in-memory cache of recently requested tiles.
pm2 load balances by starting multiple instances of the API and spreading requests between them.
Thus, each instance of the API has its own cache, and the instances are unable to share a common cache.
This uses far more memory than necessary and slows down tile requests. For example, if a given tile is requested on API instance 1 (API1), API1 caches it in memory for as long as it has room in its allocation of tile-cache memory. If another request comes in for that same tile but pm2 hands it to API instance 2 (API2), the tile must be generated from scratch, because API2 is completely unaware of API1's cache.
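The duplicated work above can be sketched with two independent in-memory caches. All names here are hypothetical (plain Maps standing in for each instance's tile cache); this is an illustration of the problem, not the actual API code:

```javascript
// Two "API instances", each with its own private tile cache (a Map).
// renderCount tracks how often a tile is generated from scratch.
let renderCount = 0;
function renderTile(key) {
  renderCount += 1;
  return `png:${key}`; // stand-in for real tile generation
}

function makeInstance() {
  const cache = new Map(); // private to this instance, invisible to others
  return function handle(key) {
    if (!cache.has(key)) cache.set(key, renderTile(key)); // miss: render
    return cache.get(key);                                // hit: reuse
  };
}

const api1 = makeInstance();
const api2 = makeInstance();
api1('tile:3/4/5'); // API1 renders and caches the tile
api2('tile:3/4/5'); // API2 renders the SAME tile again: its cache is separate
// renderCount is now 2, not 1 — the tile was generated and stored twice.
```

With pm2 spreading requests across n instances, the same hot tile can end up rendered and held in memory up to n times.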
Instead of this janky method of scaling, the API should be refactored to use the Node.js cluster API so that caches can be shared between processes and more processes can easily be spun up.
pm2 actually uses the Node.js cluster API under the hood, so instead the API's caching mechanism will be implemented with Redis, which allows n processes to share the same persistent cache.
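A minimal sketch of what the shared cache could look like, written against the async get/set interface of a node-redis v4 client. The names (cachedTile, render, the key format) are hypothetical, not from the existing codebase, and the in-memory stand-in below exists only so the logic can be exercised without a Redis server:

```javascript
// Get-or-generate tile lookup against any client exposing async get/set
// (node-redis v4 has this shape). All API processes that connect to the
// same Redis see the same keys, so a tile rendered by one process is a
// cache hit for every other process.
async function cachedTile(client, z, x, y, render) {
  const key = `tile:${z}:${x}:${y}`;
  const hit = await client.get(key);           // shared across all processes
  if (hit !== null) return Buffer.from(hit, 'base64');
  const tile = await render(z, x, y);          // generate from scratch on miss
  // EX gives the entry a TTL (seconds) so the cache cannot grow unbounded
  await client.set(key, tile.toString('base64'), { EX: 3600 });
  return tile;
}

// In-memory stand-in with the same get/set shape, for local testing
// without a running Redis server.
function memoryClient() {
  const store = new Map();
  return {
    get: async (k) => (store.has(k) ? store.get(k) : null),
    set: async (k, v, _opts) => { store.set(k, v); },
  };
}
```

With a real server the client would come from node-redis (`createClient()` followed by `await client.connect()`), and every API process, however many pm2 or cluster spins up, would point at the same Redis instance.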