deductio / server


Cache responses across the service #1

Open · Flarp opened this issue 2 months ago

Flarp commented 2 months ago

As of right now, all requests go directly from an NGINX reverse proxy to the main Rocket application, which then interfaces with a Postgres backend. This service is likely to be read-heavy rather than write-heavy, so a caching layer in front of Postgres could significantly decrease response times.
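
For reference, a minimal sketch of the read-through pattern this would enable, using the `redis` crate. The key scheme (`graph:<id>`), the 5-minute TTL, and `fetch_graph_from_postgres` are all hypothetical placeholders rather than existing code; in practice this logic would live inside the Rocket route handlers.

```rust
// Read-through cache sketch: check Redis first, fall back to Postgres,
// then populate the cache with a TTL so stale entries expire on their own.
use redis::Commands;

/// Hypothetical stand-in for the existing Postgres query.
fn fetch_graph_from_postgres(id: u64) -> redis::RedisResult<String> {
    // ... the real implementation would query Postgres here ...
    Ok(format!("{{\"graph_id\": {}}}", id))
}

fn get_graph_cached(con: &mut redis::Connection, id: u64) -> redis::RedisResult<String> {
    let key = format!("graph:{}", id); // hypothetical key scheme

    // Serve from Redis if the response is already cached.
    if let Some(cached) = con.get::<_, Option<String>>(&key)? {
        return Ok(cached);
    }

    // Cache miss: pay the Postgres cost once, then cache the result.
    let body = fetch_graph_from_postgres(id)?;
    let _: () = con.set_ex(&key, &body, 300)?; // assumed 5-minute TTL
    Ok(body)
}

fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;
    println!("{}", get_graph_cached(&mut con, 1)?);
    Ok(())
}
```

On a miss the handler pays the full Postgres cost once; subsequent reads within the TTL are served entirely from memory, which is where the win for a read-heavy workload comes from.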

Flarp commented 1 month ago

With a Redis cache in place, there is little reason to also run a Varnish cache, since the same information will already be available in memory. Redis will instead serve both registered and unregistered users, avoiding the need to maintain two pieces of caching infrastructure with the same purpose.

Moreover, the Redis cache should also hold the current trending graphs for the day, week, month, and all time, since these computations are expensive: the day, week, and month rankings require querying the entire likes table, while the all-time ranking can instead use the like_count column in the knowledge graph table, which saves some computation. Cached trending results should expire roughly every 15 minutes (using a Redis TTL) so they stay reasonably up to date.
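
A sketch of how that trending cache could look, under the same assumptions as above: `compute_trending` stands in for the expensive likes-table query (or the like_count shortcut for "all"), the `trending:<window>` keys are hypothetical, and the 900-second TTL matches the ~15-minute freshness target.

```rust
// Lazily recompute each trending window on cache miss; Redis evicts
// the entry after 900 seconds, bounding staleness to ~15 minutes.
use redis::Commands;

/// Hypothetical stand-in for the expensive trending computation.
fn compute_trending(window: &str) -> redis::RedisResult<String> {
    Ok(format!("{{\"window\": \"{}\", \"graphs\": []}}", window))
}

fn trending_cached(con: &mut redis::Connection, window: &str) -> redis::RedisResult<String> {
    let key = format!("trending:{}", window); // hypothetical key scheme
    if let Some(cached) = con.get::<_, Option<String>>(&key)? {
        return Ok(cached);
    }
    let body = compute_trending(window)?;
    // SETEX with a 900-second TTL implements the 15-minute expiry.
    let _: () = con.set_ex(&key, &body, 900)?;
    Ok(body)
}

fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;
    for window in ["day", "week", "month", "all"] {
        println!("{}", trending_cached(&mut con, window)?);
    }
    Ok(())
}
```

Letting the TTL drive invalidation means no explicit "clear every 15 minutes" job is needed: the first request after expiry simply recomputes and repopulates the key.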