
cuckoo-filter

Mentioned in Awesome Go

A cuckoo filter implementation in Go, with configurable parameters.

Ported from efficient/cuckoofilter

中文文档 (Chinese documentation)

Overview

A cuckoo filter is a Bloom filter replacement for approximate set-membership queries. While Bloom filters are well-known space-efficient data structures that answer queries like "is item x in the set?", they do not support deletion. Their variants that enable deletion (such as counting Bloom filters) usually require much more space.

Cuckoo filters provide the flexibility to add and remove items dynamically. A cuckoo filter is based on cuckoo hashing (hence the name). It is essentially a cuckoo hash table that stores each key's fingerprint. Cuckoo hash tables can be highly compact, so a cuckoo filter can use less space than a conventional Bloom filter for applications that require low false positive rates (< 3%).
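To make the "cuckoo hash table of fingerprints" idea concrete, here is a minimal sketch of partial-key cuckoo hashing as described in the paper. It is not this library's internal code: the hash function (FNV-1a), the 8-bit fingerprint, and the table size are illustrative assumptions. The key property is that an item's second bucket can be computed from its first bucket and its fingerprint alone, which is what allows fingerprints to be relocated during insertion.

package main

import (
    "fmt"
    "hash/fnv"
)

// Illustrative table size, not the library's internals.
// A power of two keeps the XOR-derived index in range.
const numBuckets = 1 << 16

func hash64(b []byte) uint64 {
    h := fnv.New64a()
    h.Write(b)
    return h.Sum64()
}

// fingerprint keeps only a few bits of the item's hash (8 here).
func fingerprint(item []byte) byte {
    return byte(hash64(item) >> 56)
}

// candidateBuckets returns the two buckets an item may occupy.
// The second index is derived from the first index and the fingerprint only,
// so an evicted fingerprint can be relocated without the original key.
func candidateBuckets(item []byte) (uint64, uint64) {
    mask := uint64(numBuckets - 1)
    fp := fingerprint(item)
    i1 := hash64(item) & mask
    i2 := (i1 ^ hash64([]byte{fp})) & mask
    return i1, i2
}

func main() {
    i1, i2 := candidateBuckets([]byte("A"))
    fmt.Printf("item A -> buckets %d and %d\n", i1, i2)
}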

For details about the algorithm and citations, please refer to:

"Cuckoo Filter: Practically Better Than Bloom" in proceedings of ACM CoNEXT 2014 by Bin Fan, Dave Andersen and Michael Kaminsky

Implementation details

The paper cited above leaves several parameters for the implementation to choose:

  1. Bucket size (b): the number of fingerprints per bucket
  2. Fingerprint size (f): the number of bits in each fingerprint

In other implementations, these parameters are typically fixed in advance.

In this implementation, you can adjust b and f to any values you want by using the TableTypeSingle table type.

In addition, the semi-sorting buckets described in the paper, which save one bit per item, are available through the TableTypePacked table type; note that b is fixed to 4 there and only f is adjustable.
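The sketch below shows how the two table types might be constructed. It assumes the arguments are, in order, bucket size b, fingerprint bits f, expected number of items, and table type, as suggested by the usage example further down; the concrete values are only illustrative.

package main

import (
    "fmt"

    "github.com/linvon/cuckoo-filter"
)

func main() {
    // Fully configurable table: both b and f are free to choose.
    single := cuckoo.NewFilter(2, 16, 100000, cuckoo.TableTypeSingle)
    fmt.Println(single.Info())

    // Semi-sorting (packed) table: saves about 1 bit per item, b is fixed to 4.
    packed := cuckoo.NewFilter(4, 9, 100000, cuckoo.TableTypePacked)
    fmt.Println(packed.Info())
}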

Why is customization important?

According to the paper:

when the false positive rate r > 0.002, having two entries per bucket yields slightly better results than using four entries per bucket; when r decreases to 0.00001 < r ≤ 0.002, four entries per bucket minimizes space.

With a bucket size b, they suggest choosing the fingerprint size f such that

f >= log2(2b/r) bits

At the same time, note that the achievable load factor is 84%, 95%, or 98% when using a bucket size of b = 2, 4, or 8 respectively.
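As a worked example of this sizing rule (the numbers are illustrative, not a recommendation): targeting r = 0.001 with b = 4 requires f >= log2(2*4/0.001) ≈ 13 bits, and at the 95% load factor for b = 4 that comes to roughly 13/0.95 ≈ 13.7 bits of table space per item. A small sketch of the same arithmetic in Go:

package main

import (
    "fmt"
    "math"
)

// minFingerprintBits applies the rule f >= log2(2b/r) from the paper.
func minFingerprintBits(b uint, r float64) uint {
    return uint(math.Ceil(math.Log2(2 * float64(b) / r)))
}

func main() {
    const r = 0.001 // target false positive rate
    const b = 4     // entries per bucket, giving roughly a 95% load factor
    f := minFingerprintBits(b, r)
    bitsPerItem := float64(f) / 0.95
    fmt.Printf("f = %d bits, about %.1f table bits per item\n", f, bitsPerItem)
}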

To learn more about choosing parameters, refer to Section 5 of the paper.

Note: generally b = 8 is enough. Without more data to support other choices, we suggest picking b from 2, 4, or 8; f can be at most 32 bits.

Example usage:

package main

import (
    "fmt"
    "github.com/linvon/cuckoo-filter"
)

func main() {
    // b = 4 fingerprints per bucket, f = 9 bits per fingerprint,
    // capacity for about 3900 items, using the semi-sorting (packed) table.
    cf := cuckoo.NewFilter(4, 9, 3900, cuckoo.TableTypePacked)
    fmt.Println(cf.Info())
    fmt.Println(cf.FalsePositiveRate())

    a := []byte("A")
    cf.Add(a)
    fmt.Println(cf.Contain(a)) // true (may also be true for items never added)
    fmt.Println(cf.Size())

    // Serialize the filter and rebuild an identical one from the bytes.
    b := cf.Encode()
    ncf, _ := cuckoo.Decode(b)
    fmt.Println(ncf.Contain(a))

    cf.Delete(a)
    fmt.Println(cf.Size())
}
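Because Encode returns a plain byte slice, a filter can be persisted and restored across processes. A minimal sketch, with the file path and error handling chosen only for illustration:

package main

import (
    "fmt"
    "log"
    "os"

    "github.com/linvon/cuckoo-filter"
)

func main() {
    cf := cuckoo.NewFilter(4, 9, 3900, cuckoo.TableTypePacked)
    cf.Add([]byte("A"))

    // Persist the encoded filter to disk.
    if err := os.WriteFile("filter.bin", cf.Encode(), 0o644); err != nil {
        log.Fatal(err)
    }

    // Restore it later, possibly in another process.
    raw, err := os.ReadFile("filter.bin")
    if err != nil {
        log.Fatal(err)
    }
    restored, err := cuckoo.Decode(raw)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(restored.Contain([]byte("A"))) // true
}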