isomerpages / isomercms-backend

A static website builder and host for the Singapore Government

fix(headerinterpreter): use custom interpreter #774

Closed · seaerchin closed this 1 year ago

seaerchin commented 1 year ago

Problem

Currently, Isomer consumes a huge number of GitHub API requests because responses are not cached. This leads to a scaling issue: our GitHub token rate limits are quickly reached and users are unable to access the CMS.

Closes IS-172

Solution

  1. use an external package, axios-cache-interceptor, to utilise ETags for caching. See here for info on ETags. A minimal sketch of the setup follows this list.
    • note that this uses in-memory storage. the implication is that some requests which could be served from cache are not (current ETag value is 1 -> user A edits -> ETag value is now 2 -> another instance doesn't know -> that instance queries github afresh -> ETag value updated)
    • potential for stale reads - github will tell us whether the old ETag value is outdated (ie, return 200 instead of 304) and the time to live (ttl) of the response using the maxAge header (in this case, 60s), but we have opted to ignore the ttl and always revalidate. this is to avoid github returning 409s when we attempt to update using an old sha
    • usage of a custom headerInterpreter - the change made compared to the default one is that we don't use the maxAge property on the request and instead default to a mustRevalidate behaviour
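As a rough illustration (not the actual Isomer code), a setup along these lines wraps the GitHub axios instance with setupCache and swaps in a header interpreter that ignores maxAge. The option names below (etag, interpretHeader, headerInterpreter) exist in axios-cache-interceptor, but the exact shapes, the instance name, and the baseURL are assumptions that should be checked against the pinned version (0.9.2).

```ts
// Hedged sketch: cache GitHub responses with axios-cache-interceptor, but treat
// every cached entry as stale so it is always revalidated via its stored ETag.
// Assumes axios-cache-interceptor ~0.9.x; verify option shapes against the docs.
import axios from "axios"
import { setupCache } from "axios-cache-interceptor"

// Custom header interpreter: ignore Cache-Control/maxAge entirely and return a
// TTL of 0, so cache hits are never served without an If-None-Match check.
const mustRevalidateHeaderInterpreter = () => 0

const isomerGithubAxios = setupCache(
  axios.create({ baseURL: "https://api.github.com" }), // assumed base URL
  {
    etag: true, // store ETags and send If-None-Match on later requests
    interpretHeader: true, // run the interpreter below on response headers
    headerInterpreter: mustRevalidateHeaderInterpreter,
    // default storage is in-memory, so other CMS instances do not share it
  }
)

export default isomerGithubAxios
```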

Tests

  1. first, add a console.log for remainingRequests into respHandler in AxiosInstance.ts, so that we know how many requests are remaining (a rough sketch of this logging is shown after this list)
  2. next, issue a GET request to the e2e-test-repo's homepage (curl --request GET --url http://localhost:8081/v2/sites/e2e-test-repo/homepage)
  3. observe the # of requests remaining
  4. repeat steps 2-3; the # of requests remaining should be unchanged
  5. make an edit on the homepage
  6. repeat steps 2-3; the # of requests should decrease
  7. update the page again
  8. repeat steps 2-3; the request should succeed and the # of requests should decrease
  9. wait for 60s and reload the page
  10. the request should succeed and the # of requests should stay constant
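As a rough illustration of step 1, the snippet below logs GitHub's remaining rate-limit quota in an axios response handler. respHandler and AxiosInstance.ts are names from this repo, but the body shown here is a hedged sketch of the temporary debug logging, not the actual Isomer code.

```ts
// Hypothetical sketch of the temporary logging described in step 1.
// GitHub reports the remaining request quota in the x-ratelimit-remaining header.
import { AxiosResponse } from "axios"

const respHandler = (response: AxiosResponse): AxiosResponse => {
  // Temporary: surface how many requests remain on the current token
  const remainingRequests = response.headers["x-ratelimit-remaining"]
  console.log(`Remaining GitHub requests: ${remainingRequests}`)
  return response
}

export default respHandler
```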

New dependencies:

  • axios-cache-interceptor (minimum version in the deploy notes below)

Deploy notes

  1. axios got upgraded from 0.21.x to 0.25.0
    • list of breaking changes here
    • went through most of them; they are mostly typing changes
  2. had to use 0.9.2 as a minimum for axios-cache-interceptor
    • see compat chart here
    • this is because of 2 reasons: ETag functionality was only added in 0.6, and their import of object-code was outdated, which didn't get fixed until 0.9.2

from @kishore03109: I remember testing this locally during the approval process + there's no good way to test this in staging

seaerchin commented 1 year ago

if we are invalidating on every API call, how exactly is the etag the same for 60 secs ah? wouldn't every call invalidate the sha already?

the sha is different from the etag - the etag is the same not just for 60s, but until the content changes. we are invalidating our internal cached data and revalidating with github using the etag to see if our cached data can be reused. if github returns 304, we reuse it; otherwise, we set the internal cache with the new data
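for illustration, the round-trip described above looks roughly like this (a hand-rolled sketch; in the actual change axios-cache-interceptor handles these headers for us):

```ts
// Illustrative only: the ETag revalidation round-trip, written by hand here
// even though axios-cache-interceptor performs it automatically.
import axios from "axios"

async function fetchWithEtag(url: string, cached?: { etag: string; data: unknown }) {
  const response = await axios.get(url, {
    headers: cached ? { "If-None-Match": cached.etag } : {},
    // 304 Not Modified is an expected outcome, not an error
    validateStatus: (status) => status === 200 || status === 304,
  })

  if (response.status === 304 && cached) {
    return cached // content unchanged: reuse our cached data
  }
  // content changed: replace the cache with the new data and ETag
  return { etag: response.headers.etag, data: response.data }
}
```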

hmm checking in, you mean maxAge property on the response right?

oops, yes!

kishore03109 commented 1 year ago

[test]: let's run e2e on this! i think the command is !run-e2e iirc?

seaerchin commented 1 year ago

[test]: let's run e2e on this! i think the command is !run-e2e iirc?

sadly it's only available on the frontend. we'd have to push this to staging first - i'll test this alongside another FE PR.

seaerchin commented 1 year ago

test run here; will double-check for any bugs prior to merge

seaerchin commented 1 year ago

double-checked the failing test cases - all pass manually