llfbandit / dio_cache_interceptor

Dio HTTP cache interceptor with multiple stores respecting HTTP directives (ETag, Last-Modified, Cache-Control) with options.
https://pub.dev/packages/dio_cache_interceptor

Add support for "soft" stale #49

Closed yringler closed 2 years ago

yringler commented 2 years ago

Could an option be added for maxStaleSoft, which would only try to get new data after the duration has been reached, and even then would return stale data if it isn't able to get new data? I guess maxStaleSoft would be either a Duration (which would be overridden by maxStale) or a boolean (which would modify maxStale). Or, a more general option, alwaysAllowCacheFallback, which would mean that whatever the cause, if the cache is expired but we don't have new data, return the cached data.

:bulb: "If cache is expired but we can't get new data now, return cached data" - is that already how the interceptor works? Is that a reasonable default?
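
To make it concrete, something like this is what I have in mind (maxStaleSoft is purely hypothetical here, not an existing CacheOptions field):

import 'package:dio_cache_interceptor/dio_cache_interceptor.dart';

final options = CacheOptions(
  store: MemCacheStore(),
  // Hard limit (exists today): the entry is deleted once this has elapsed.
  maxStale: const Duration(days: 7),
  // Hypothetical: serve the cache freely for 3 hours; after that, try the
  // network and fall back to the cached entry if the request fails.
  // maxStaleSoft: const Duration(hours: 3),
);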

My use case:

I have HTTP endpoints (for audio classes relevant to today, etc.) whose content:

  1. Might be outdated in 3 hours*
  2. Is always relevant - yesterday's class doesn't become wrong just because a day has passed

I would like the cached response to always be returned if it has been cached for less than 3 hours. After 3 hours - try to update the cache, and return cached data if the HTTP request fails (most likely because of no data connection).

*) A more or less arbitrary number. The assumption is that if the user has yesterday's class for 3 extra hours, it's fine.
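
For what it's worth, the fallback-on-failure half may already be covered by the existing hitCacheOnErrorExcept option; the 3-hour "soft stale" window is the missing piece. A sketch of what I mean (my assumption is that an empty list means "fall back to cache on any error"):

import 'package:dio/dio.dart';
import 'package:dio_cache_interceptor/dio_cache_interceptor.dart';

final cacheOptions = CacheOptions(
  store: MemCacheStore(),
  // Tolerate roughly 3 hours of staleness before the entry is dropped.
  maxStale: const Duration(hours: 3),
  // Assumption: with an empty list, any failed request (e.g. no connection)
  // falls back to the cached response instead of surfacing the error.
  hitCacheOnErrorExcept: [],
);

final dio = Dio()..interceptors.add(DioCacheInterceptor(options: cacheOptions));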

If you think you might get to this soon, great! If this isn't a fit for this project, also great! I'll figure something out. If you think it would be a nice feature, but don't know if you'll ever get around to it - I'll take a look and consider if it would take less time to open a PR or use a different caching method.

llfbandit commented 2 years ago

Thanks for your proposal.

If I understand you well, you manage the cache with the client only. For now, the content will be deleted once stale, and your concern is about the transferred data volume.

Maybe a simpler solution is to provide a new field to postpone maxStale when you request the content, so the deletion is skipped. In this way, we could mimic standard HTTP cache header updates on this field.

In fact, you're anticipating a bit of what is coming: a new version is under construction and I was wondering if it was a good idea to act on maxStale.

Now I know! 👍

Does this solution suit your needs?

yringler commented 2 years ago

If I understand you well, you manage the cache with the client only. For now, the content will be deleted once stale, and your concern is about the transferred data volume.

💯 I'm also concerned about decreasing load on the server.

Maybe a simpler solution is to provide a new field to postpone maxStale when you request the content, so the deletion is skipped.

I don't follow exactly. You mean, maxStale would return stale data unless it has newer data cached?

In this way, we could mimic standard HTTP cache header updates on this field.

Are you referring to the stale-while-revalidate Cache-Control directive?

Does this solution suit your needs?

If I understand correctly (and probably also if not tbh) yes.

llfbandit commented 2 years ago

I don't follow exactly. You mean, maxStale would return stale data unless it has newer data cached?

It is always the responsibility of the client to tune maxStale since we're outside HTTP conventions. So no, the idea is to be able to postpone the deadline on each request.

Example:

You request again the same content at 2. With the potential new behaviour, content would stale at 4.

Currently it would stale at 3 without any possible recovery.
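
In code, roughly (a sketch only; the exact API isn't settled): each request that passes a maxStale would reset the deadline from "now".

import 'package:dio/dio.dart';
import 'package:dio_cache_interceptor/dio_cache_interceptor.dart';

// Sketch: re-request the same URL with a fresh maxStale to push the expiry
// forward. The released behaviour/field may end up looking different.
Future<void> refreshKeepingCache(Dio dio, CacheStore store, String url) async {
  final refreshed = CacheOptions(
    store: store,
    policy: CachePolicy.forceCache,
    maxStale: const Duration(hours: 3), // deadline becomes now + 3h again
  );
  await dio.get(url, options: refreshed.toOptions());
}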

yringler commented 2 years ago

You request again the same content at 2. With the potential new behaviour, content would stale at 4.

Ah, so each time you make a request, it pushes off deletion. A sliding expiration, where the cache expires if it hasn't been used in X seconds.

But I want to get new data, either with request 2 or 3 (in your example). With your design, it seems that if I allow postponing for a week, the user might be stuck with old data for the whole week.

With what you had in mind - in your example, if I request content at 1, and 2, and 3, and 4, and 5 - would any of those requests get fresh data from the server?

llfbandit commented 2 years ago

No, a valid cached response is always resolved before requesting with the forceCache policy. How could you differentiate newer vs. older data on the same request?

yringler commented 2 years ago

No, a valid cached response is always resolved before requesting with the forceCache policy.

Ah, so if I want a new HTTP request to be made at maxStale, but old data to be used until that request is successful - what combination of options is intended?

I thought of CachePolicy.request, but it seems from the docs that it works with response headers? I wasn't sure if it would treat the maxStale option the same way.

So it looks like I'd want CachePolicy.request with a maxStale, which would use the cache until maxStale, and then continue using the cache until the HTTP request gets a 200 status.

EDIT:

How could you differentiate newer vs. older data on the same request?

Note that it's fine for me if the "new" data is the same as the old - it's ok to keep using that "new" data until maxStale, and then check again.

yringler commented 2 years ago

So it seems that, because my API isn't returning cache headers, there isn't a way to say "cache for 30 minutes, then delete the cache." If I set CachePolicy.request, it doesn't cache at all. If I set forceCache, it ignores maxStale and never expires.

Maybe there should be a CachePolicy.assumeCachable or something, which means "assume that the response could be cached, even if the API doesn't say so, and make it stale based on the maxStale option"

It might be easier for me to add an interceptor to add cache headers? But I'll try forking and opening a PR.

yringler commented 2 years ago

I tried modifying the interceptor, but ended up just using the interceptor below.

EDIT: I thought this was working? But now something isn't working right, IDK.

import 'package:dio/dio.dart';
import 'package:dio_cache_interceptor/dio_cache_interceptor.dart';

/// Adds a Cache-Control header to every response, derived from the maxStale
/// duration in the request's CacheOptions, so the cache interceptor treats
/// the response as cacheable.
class AddCacheHeaders extends Interceptor {
  @override
  void onResponse(
    Response response,
    ResponseInterceptorHandler handler,
  ) {
    final options = CacheOptions.fromExtra(response.requestOptions);

    if (options?.maxStale != null && options!.maxStale! > Duration.zero) {
      response.headers.add(
          'Cache-Control', 'public, max-age=${options.maxStale!.inSeconds}');
    }

    // The base implementation forwards the response (handler.next).
    super.onResponse(response, handler);
  }
}
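
And the wiring, for reference (my assumption is that Dio runs onResponse in the order interceptors were added, so this one has to be registered before the cache interceptor for the header to be there when the response is stored):

final cacheOptions = CacheOptions(
  store: MemCacheStore(),
  maxStale: const Duration(minutes: 30),
);

final dio = Dio()
  ..interceptors.add(AddCacheHeaders())
  ..interceptors.add(DioCacheInterceptor(options: cacheOptions));
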
llfbandit commented 2 years ago

Your assumptions are wrong. You should use the forceCache policy since you don't have cache directives from your origin server.

When using the cache with the client only, just remember that it is your responsibility to handle the cache expiry.

So a potential workflow for you:

  • Check the store to see if your entry is in cache and not stale.
  • If the cache is invalid => forceCache with maxStale.
  • If the cache is valid => forceCache with or without a new maxStale, depending on the behaviour you want (available in v3.2.1, just released).

yringler commented 2 years ago

I'll have to try that. I didn't realize I could check the cache store myself. How do I do that? Does the interceptor expose a method for that?

llfbandit commented 2 years ago

To use this package you must provide a store (MemCacheStore, FileCacheStore, ...); keep a reference to it so you can access it. Its API is quite simple: get, set, exists, ... The only painful code to write is how you recover the key. By default it is _uuid.v5(Uuid.NAMESPACE_URL, request.uri.toString()).
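
Putting the whole workflow together, a sketch (assuming you reuse the interceptor's options and keyBuilder; adapt the policies and durations to your needs):

import 'package:dio/dio.dart';
import 'package:dio_cache_interceptor/dio_cache_interceptor.dart';

final store = MemCacheStore();
final baseOptions = CacheOptions(store: store, policy: CachePolicy.forceCache);
final dio = Dio()..interceptors.add(DioCacheInterceptor(options: baseOptions));

Future<Response> fetch(String url) async {
  // Recover the cache key the same way the interceptor does.
  final key = baseOptions.keyBuilder(RequestOptions(path: url));
  final entry = await store.get(key);
  final expired = entry == null ||
      (entry.maxStale != null && DateTime.now().isAfter(entry.maxStale!));

  final options = CacheOptions(
    store: store,
    policy: CachePolicy.forceCache,
    // Invalid or expired entry: set maxStale so the refreshed response lives
    // another 3 hours. Valid entry: keep it, or set maxStale again to
    // postpone the deadline.
    maxStale: expired ? const Duration(hours: 3) : null,
  );

  return dio.get(url, options: options.toOptions());
}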

yringler commented 2 years ago

Fun! Thank you. Maybe have that as a static method somewhere, and mention in the ForceCache doc?

llfbandit commented 2 years ago

Well, yes and no; each use case is different when it comes to client-only caching. Most of the time people are abusing the term "cache". Mostly, everyone wants to retrieve data when offline and/or for as long as they need. This is really a business concern.

yringler commented 2 years ago

Most of the time people are abusing the term "cache". Mostly, everyone wants to retrieve data when offline and/or for as long as they need. This is really a business concern.

Guilty as charged :laughing:

I think the reason it's tempting in Flutter is that there's no reflection. Usually, there'd be a cache system and I'd throw my object at it without thinking twice. With Flutter, because we need code generators etc., everything is very deliberate. So, if I need a simple cache - just caching the response is the simplest way to get that.

yringler commented 2 years ago

In the end I have to agree with you - I needed something else, and I'm now using a simple JSON file cache for the data if the user is offline.
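
("Simple JSON file cache" means roughly this kind of thing; the class and names below are mine, not from any package:)

import 'dart:convert';
import 'dart:io';

// Minimal offline fallback: persist the last successful JSON payload per key.
class JsonFileCache {
  JsonFileCache(this.directory);

  final Directory directory;

  File _fileFor(String key) => File('${directory.path}/$key.json');

  Future<void> save(String key, Map<String, dynamic> json) =>
      _fileFor(key).writeAsString(jsonEncode(json));

  Future<Map<String, dynamic>?> load(String key) async {
    final file = _fileFor(key);
    if (!await file.exists()) return null;
    return jsonDecode(await file.readAsString()) as Map<String, dynamic>;
  }
}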