kontent-ai / delivery-sdk-net

Kontent.ai Delivery .NET SDK
https://www.nuget.org/packages/Kontent.Ai.Delivery
MIT License

DeliveryClientCache.GetItemsAsync generates different cache keys for same query parameters #362

Closed · n33470 closed this issue 1 year ago

n33470 commented 1 year ago

Brief bug description

We use the DeliveryClientCache with a distributed Redis cache. In our runtime environment we have an API that retrieves content items from Kontent.ai, and that API is hosted on multiple servers.

What we've noticed is that each server generates a different cache key in redis for the same content items, using the same query parameters. The same behavior occurs when different developers run the same API on their own local machines.

This happens because CacheHelpers.GetItemKey embeds typeof(T).GetHashCode().ToString() in the cache key, and the hash code for the type comes out as a different number on each server.

This becomes a problem because we invalidate cache items via the Kontent.ai webhook that signals a new publish of a content item. The webhook is processed on only one of the servers in the cluster, so it cannot remove all of the cache entries for that item; it can only invalidate the entries that were generated on that same server.
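A minimal, self-contained sketch of the key construction described above (assumed names and a simplified key shape, not the SDK source):

```csharp
using System;

public class Article { }

public static class CacheKeyDemo
{
    // Hypothetical stand-in for the part of CacheHelpers.GetItemKey this issue
    // points at: the cache key embeds typeof(T).GetHashCode(), a value that is
    // only meaningful inside a single process.
    public static string BuildKey<T>(string query)
        => $"item|{typeof(T).GetHashCode()}|{query}";
}

public class Program
{
    public static void Main()
    {
        // Run this on two servers (or two developer machines): per this report,
        // the hash-code segment differs, so the Redis entries never line up and
        // a webhook handled on one server cannot evict entries written by another.
        Console.WriteLine(CacheKeyDemo.BuildKey<Article>("system.type=article"));
    }
}
```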

Repro steps

Call DeliveryClientCache.GetItemsAsync() with the exact same generic type and the exact same query parameters from two different machines. You should notice that each machine creates its own cache entries.

Expected behavior

The Redis cache entries should use the same cache key for the exact same content type with the exact same query parameters, regardless of which server generated the cached entry.


Simply007 commented 1 year ago

related to #234

Simply007 commented 1 year ago

see https://stackoverflow.com/questions/6114772/gethashcode-gives-different-results-on-different-servers/6114944#6114944

Simply007 commented 1 year ago

Hello @n33470,

Thank you for the issue! I have linked some related resources.

We will take a look at what our options are and evaluate the original idea and current effects of including the hash code in the cache key.

I guess the workaround would be to override the GetHashCode for the respective models using Model customization via partial classes.

Simply007 commented 1 year ago

Note: Discuss potential resolution with @vit-svoboda

n33470 commented 1 year ago

In my opinion, the fix should be made in CacheHelpers so that it does not use the GetHashCode() of the type being cached; it should simply use the class name of the type instead, e.g. typeof(T).Name.ToLower() in place of the hash code.
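A minimal sketch of that idea, assuming a hypothetical Article model and a simplified key shape (the real method is CacheHelpers.GetItemKey):

```csharp
using System;

public class Article { }

public static class ProposedKeyDemo
{
    // Hypothetical illustration of the suggestion above (not the actual SDK
    // code): derive the type segment of the key from the type name, which is
    // the same on every server, instead of from a process-specific hash code.
    public static string BuildKey<T>(string codename)
        => $"item|{typeof(T).Name.ToLower()}|{codename}";
}

public class Program
{
    public static void Main()
    {
        // Prints "item|article|my_article" on every machine.
        Console.WriteLine(ProposedKeyDemo.BuildKey<Article>("my_article"));
    }
}
```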

vit-svoboda commented 1 year ago

I agree, though I think we need to use FullName instead, as we have no control over the types passed in as T and there may be two types with the same name in different namespaces.
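A quick illustration of the collision FullName avoids, using two hypothetical models that share a simple name:

```csharp
using System;

namespace Blog { public class Article { } }
namespace News { public class Article { } }

public class Program
{
    public static void Main()
    {
        // Name alone collides for unrelated models...
        Console.WriteLine(typeof(Blog.Article).Name == typeof(News.Article).Name);         // True
        // ...while FullName keeps the namespace and stays unique per type.
        Console.WriteLine(typeof(Blog.Article).FullName == typeof(News.Article).FullName); // False
    }
}
```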

dzmitryk-kontent-ai commented 1 year ago

Hi @n33470, the problem itself is already fixed, but we had a problem with publishing a new package version to NuGet because of the known error NETSDK1194: The "--output" option isn't supported when building a solution. As it should be fixed with the March release of the .NET SDK, we decided to wait for that. We are sorry for the delay.

dzmitryk-kontent-ai commented 1 year ago

The problem with cache keys for the distributed cache is fixed and has been released to NuGet in version 17.3.0.