Great point. The default cache seems to be InMemoryCache, and I never invalidate the cache, so Todo might be stale.
In production you would probably want a shared cache across requests, but you would also want to invalidate it after an updateTodo to prevent stale data.
Given the scope of this repo is just a simple demonstration of wiring a GqlGen server with graph-gophers/dataloader, I think I'll update the sample code to explicitly use a NoCache.
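Roughly, that would look something like this (a minimal sketch against the pre-v7, non-generic graph-gophers/dataloader API; batchGetTodos and the placeholder data are illustrative, not the repo's actual code):

```go
package main

import (
	"context"
	"fmt"

	"github.com/graph-gophers/dataloader"
)

// batchGetTodos loads all requested todos in a single round trip.
// The todoByID map is a stand-in for a real database query.
func batchGetTodos(ctx context.Context, keys dataloader.Keys) []*dataloader.Result {
	todoByID := map[string]string{"1": "buy milk"} // placeholder data
	results := make([]*dataloader.Result, len(keys))
	for i, key := range keys {
		results[i] = &dataloader.Result{Data: todoByID[key.String()]}
	}
	return results
}

func main() {
	// NoCache disables result caching entirely; calls are still batched.
	loader := dataloader.NewBatchedLoader(batchGetTodos, dataloader.WithCache(&dataloader.NoCache{}))

	thunk := loader.Load(context.Background(), dataloader.StringKey("1"))
	todo, err := thunk()
	fmt.Println(todo, err)
}
```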
Thoughts, @SillyMoo?
I would argue that the Facebook dataloader model is purely a per-request cache/batching mechanism. A cache across requests should be implemented at a different layer. The reason is that invalidation only works on a single node: if I modify the task on one micro-service instance (for example) and invalidate the cache in the process, it won't affect the cache on another instance, which may still see the wrong version of the task.
From https://github.com/graphql/dataloader: "To get started, create a DataLoader. Each DataLoader instance represents a unique cache. Typically instances are created per request when used within a web-server like express if different users can see different things."
And an explanation of my suggested layering of caches, with DataLoader being the per-request layer, can be found here: https://june2.github.io/posts/DataLoader/.
Funny enough, that's how I had implemented it in the beginning, but I changed it for another user in this pull request.
FWIW, we use gqlgen at my company, and we do spin up a dataloader for each request.
Following my chat with the other contributor, I was of the opinion it was "simpler" to have a single data loader, but you've convinced me that the per-request instance is more realistic.
I'm going to revert the code to a request-scoped loader and also use NoCache.
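The request-scoped wiring would look roughly like this (a sketch only, reusing the batchGetTodos batch function from the earlier snippet; the middleware and context-key names are illustrative, not the exact code in this repo):

```go
package main

import (
	"context"
	"net/http"

	"github.com/graph-gophers/dataloader"
)

// loaderCtxKey is an unexported type to avoid context key collisions.
type loaderCtxKey struct{}

// DataloaderMiddleware creates a fresh, request-scoped loader for every
// incoming request, so no cached result can leak across requests.
// batchGetTodos is the same batch function sketched earlier.
func DataloaderMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		loader := dataloader.NewBatchedLoader(
			batchGetTodos,
			dataloader.WithCache(&dataloader.NoCache{}),
		)
		ctx := context.WithValue(r.Context(), loaderCtxKey{}, loader)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// TodoLoaderFor pulls the request-scoped loader back out inside a resolver.
func TodoLoaderFor(ctx context.Context) *dataloader.Loader {
	return ctx.Value(loaderCtxKey{}).(*dataloader.Loader)
}
```

Resolvers would then call TodoLoaderFor(ctx).Load(ctx, dataloader.StringKey(id)) instead of reaching for a shared loader.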
Thanks, @SillyMoo
No, thank you for your project, @zenyui; it helped me get started with graph-gophers/dataloader, which I am now using in an internal project for my current employer.
If the data is truly dynamic then this will fail, effectively falling back to the same cached result every time. For example, if the app supported an 'updateTodo' endpoint, then the next time we come in and get the TODO, we'll still return the old version.
This is because you use a single cache across the whole system, not one per request.