groupcache / groupcache-go

A high-performance, in-memory distributed cache
Apache License 2.0

Support different cache implementations #4

Closed thrawn01 closed 5 months ago

thrawn01 commented 5 months ago

Purpose

To support alternative hot and main cache implementations that may have better performance under high concurrency workloads.

Implementation

Usage

import (
    "log/slog"

    "github.com/groupcache/groupcache-go/v3"
    "github.com/groupcache/groupcache-go/v3/contrib"
    "github.com/segmentio/fasthash/fnv1" // assumed import path for fnv1.HashBytes64
)

// Create a new groupcache instance with a custom cache implementation.
// `t` is a previously constructed transport.
instance := groupcache.New(groupcache.Options{
    CacheFactory: func(maxBytes int64) (groupcache.Cache, error) {
        return contrib.NewOtterCache(maxBytes)
    },
    HashFn:    fnv1.HashBytes64,
    Logger:    slog.Default(),
    Transport: t,
    Replicas:  50,
})

See #1

thrawn01 commented 5 months ago

This is ready for review. @udhos, @gedw99 @Tochemey @Jvb182 @Baliedge @MatthewEdge

I'm also pinging @maypok86 to thank him for the wonderful otter cache! (I hope I'm using it correctly 😄 )

Please review and let me know what I missed or broke.

Tochemey commented 5 months ago

@thrawn01 Also, regarding the cache rejected stats: is it standard practice to reject items? If not, then I think we are leaking an Otter-specific feature here.

maypok86 commented 5 months ago

> @thrawn01 Also, regarding the cache rejected stats: is it standard practice to reject items? If not, then I think we are leaking an Otter-specific feature here.

You can, it's your decision. I'd even be a little glad, since I wasn't prepared for Otter's popularity. But I'd like to say a little about rejection. There are several ways it can happen, and different caches take different approaches.

  1. The Ristretto approach: simply discard the inserted item if high contention is detected. This approach is very rare and quite controversial. To be honest, I really don't like it.
  2. Rejecting an item that is too large. This is quite common, because inserting such an item would cause a sharp drop in the hit rate. This is the approach Otter uses.
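The second approach can be illustrated with a minimal sketch. The `SizeBoundedCache` type below is hypothetical (not Otter's actual implementation): it refuses any single item whose cost exceeds a fixed fraction of the cache's capacity, and counts those refusals as a stat.

```go
package main

import "fmt"

// SizeBoundedCache is a hypothetical illustration of approach 2:
// items whose cost exceeds a fraction of total capacity are rejected
// outright instead of evicting many smaller entries to make room.
type SizeBoundedCache struct {
	capacity int64            // total budget in bytes
	maxCost  int64            // largest admissible single item
	used     int64            // bytes currently stored
	items    map[string]int64 // key -> cost
	rejected int64            // stat: how many inserts were refused
}

func NewSizeBoundedCache(capacity int64) *SizeBoundedCache {
	return &SizeBoundedCache{
		capacity: capacity,
		maxCost:  capacity / 10, // mirrors the 1/10th rule discussed in this thread
		items:    make(map[string]int64),
	}
}

// Add stores the item unless its cost alone would crowd out too much
// of the cache, in which case it is rejected and counted.
func (c *SizeBoundedCache) Add(key string, cost int64) bool {
	if cost > c.maxCost {
		c.rejected++
		return false
	}
	// (A real cache would evict here to stay under capacity; elided.)
	c.items[key] = cost
	c.used += cost
	return true
}

func main() {
	c := NewSizeBoundedCache(1000)
	fmt.Println(c.Add("small", 50))  // true: under the 100-byte limit
	fmt.Println(c.Add("large", 200)) // false: exceeds capacity/10
	fmt.Println(c.rejected)          // 1
}
```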
thrawn01 commented 5 months ago

> @thrawn01 Also, regarding the cache rejected stats: is it standard practice to reject items? If not, then I think we are leaking an Otter-specific feature here.

Yes, it's pretty common.

I wrote a section in the README called Cache Size Implications to make users aware of the possibility of rejections. TL;DR: since we use Otter for both the main and hot caches, the hot cache is 1/8th the size of the main cache, and Otter's max cost is calculated as 1/10th of a cache's max size, it's possible that large items will be rejected from the hot cache.
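The arithmetic behind that warning can be made concrete. The ratios (hot cache = 1/8th of the configured bytes, max item cost = 1/10th of a cache's size) come from the comment above; the helper function itself is illustrative, not groupcache's actual code.

```go
package main

import "fmt"

// maxItemSize applies the two ratios described above: the hot cache
// gets 1/8th of the configured bytes, and each cache rejects any
// single item costing more than 1/10th of its own size.
func maxItemSize(cacheBytes int64) (mainMax, hotMax int64) {
	mainMax = cacheBytes / 10      // main cache rejection threshold
	hotMax = (cacheBytes / 8) / 10 // hot cache is 1/8th the size
	return mainMax, hotMax
}

func main() {
	// With a 64 MiB group, an item of a few MiB fits the main cache
	// but is rejected from the 8 MiB hot cache (limit under 1 MiB).
	mainMax, hotMax := maxItemSize(64 << 20)
	fmt.Println(mainMax) // 6710886
	fmt.Println(hotMax)  // 838860
}
```

So an item that is admitted (and a hit) in the main cache can still be a guaranteed miss in the hot cache, which is exactly the surprise the README section warns about.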

I considered making Otter the default cache implementation, but your reaction confirms my concern that users might be caught off guard and not understand why the hot cache has a higher miss ratio than they expect.

> @thrawn01 I believe that, like discovery, the contrib cache should go into another repository, because of the different cache integrations developers may want to add.

It's such a small include that it doesn't need its own repo. Also, thanks to Go modules' module graph pruning, Otter will not become a dependency of your project unless your project actually uses contrib. It also serves as a great example of how to add your own third-party cache!
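To sketch what "add your own third-party cache" looks like: you satisfy the cache interface and hand groupcache a factory, the same way contrib adapts Otter. The `Cache` interface below is a hypothetical minimal shape for illustration only; the real `groupcache.Cache` definition lives in the repo.

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a hypothetical minimal shape of a pluggable cache
// interface; check the groupcache repo for the real definition.
type Cache interface {
	Add(key string, value []byte)
	Get(key string) ([]byte, bool)
}

// mapCache adapts a plain mutex-guarded map to that interface,
// analogous to how contrib wraps a third-party cache.
type mapCache struct {
	mu       sync.Mutex
	maxBytes int64
	data     map[string][]byte
}

// NewMapCache has the factory shape: take a byte budget, return a Cache.
func NewMapCache(maxBytes int64) (Cache, error) {
	return &mapCache{maxBytes: maxBytes, data: make(map[string][]byte)}, nil
}

func (c *mapCache) Add(key string, value []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[key] = value // a real implementation would evict to stay under maxBytes
}

func (c *mapCache) Get(key string) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.data[key]
	return v, ok
}

func main() {
	c, _ := NewMapCache(1 << 20)
	c.Add("k", []byte("v"))
	v, ok := c.Get("k")
	fmt.Println(ok, string(v)) // true v
}
```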