
C network daemon for bloom filters
http://armon.github.io/bloomd

Bloomd

Bloomd is a high-performance C server that exposes bloom filters and operations on them to networked clients. It uses a simple, human-readable ASCII protocol similar to memcached's.

Bloom filters are a type of sketching or approximate data structure. They trade exactness for efficiency of representation. Bloom filters provide a set-like abstraction, with the important caveat that they may contain false-positives, meaning they may claim an element is part of the set when it was never in fact added. The rate of false positives can be tuned to meet application demands, but reducing the error rate rapidly increases the amount of memory required for the representation.

TL;DR: Bloom filters let you represent 1 million items with a false-positive rate of 0.1% in 2.4MB of RAM.
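The standard sizing formulas make the TL;DR concrete: for n items at false-positive rate p, an ideal bloom filter needs m = -n*ln(p)/(ln 2)^2 bits and k = (m/n)*ln 2 hash functions. A minimal sketch in Python (the function names are illustrative, not part of bloomd):

```python
import math

def bloom_bits(n, p):
    """Bits needed by an ideal bloom filter for n items at false-positive rate p."""
    return math.ceil(-n * math.log(p) / (math.log(2) ** 2))

def bloom_hashes(n, p):
    """Optimal number of hash functions: k = (m/n) * ln 2."""
    return round(bloom_bits(n, p) / n * math.log(2))

n, p = 1_000_000, 0.001
print(bloom_bits(n, p) // 8)   # roughly 1.8MB for the raw bit array
print(bloom_hashes(n, p))      # 10
```

The raw bit array for this example is about 1.8MB; bloomd's actual footprint is somewhat larger due to metadata and scaling overhead.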

Features

Install

Download and build from source:

$ git clone https://armon@github.com/armon/bloomd.git
$ cd bloomd
$ pip install SCons  # bloomd builds with SCons; skip if it is already installed
$ scons
$ ./bloomd

The build will report some errors for the test code, which depends on libcheck. To build the test code successfully, do the following:

$ cd deps/check-0.9.8/
$ ./configure
$ make
# make install
# ldconfig (necessary on some Linux distros)

Then re-build bloomd. At this point, the test code should build successfully.

For CentOS or RHEL users, the kind folks from Vortex RPM have made a repo available with RPMs.

Usage

Bloomd can be configured using a file which is in INI format. Here is an example configuration file:

# Settings for bloomd
[bloomd]
tcp_port = 8673
data_dir = /mnt/bloomd
log_level = INFO
flush_interval = 300
workers = 2

Then run bloomd, pointing it to that file:

bloomd -f /etc/bloomd.conf
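Since the file is plain INI, it can be generated or checked with standard tooling. A quick sketch using Python's configparser (the validation here is illustrative, not something bloomd itself performs):

```python
import configparser

SAMPLE = """
[bloomd]
tcp_port = 8673
data_dir = /mnt/bloomd
log_level = INFO
flush_interval = 300
workers = 2
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

bloomd = config["bloomd"]
print(bloomd.getint("tcp_port"))   # 8673
print(bloomd["data_dir"])          # /mnt/bloomd
```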

A full list of configuration options is below.

Clients

Here is a list of known client implementations:

Here is a list of "best-practices" for client implementations:

Configuration Options

Each configuration option is documented below:

Protocol

By default, Bloomd will listen for TCP connections on port 8673. It uses a simple ASCII protocol that is very similar to memcached.

A command has the following syntax:

cmd [args][\r]\n

We start each line by specifying a command, providing optional arguments, and ending the line in a newline (carriage return is optional).

There are 11 commands in total:

  1. create - create a new filter
  2. list - list filters, optionally matching a prefix
  3. drop - delete a filter
  4. close - close a filter, unloading it from memory
  5. clear - remove a closed filter from bloomd's filter list
  6. check (alias c) - test if a key is in a filter
  7. set (alias s) - add a key to a filter
  8. multi (alias m) - check many keys at once
  9. bulk (alias b) - set many keys at once
  10. info - report statistics for a filter
  11. flush - flush one or all filters to disk

For the create command, the format is:

create filter_name [capacity=initial_capacity] [prob=max_prob] [in_memory=0|1]

Note:

  1. capacity must be > 10,000 (1e4, 10K)
  2. capacity should be <= 1,000,000,000 (1e9)
  3. prob must be < 0.1 (1e-1)
  4. prob should be <= 0.01 (1e-2)

Here filter_name is the name of the filter and may contain the characters a-z, A-Z, 0-9, ., and _. If an initial capacity is provided, the filter is created to store at least that many items initially; otherwise the configured default is used. Similarly, if a maximum false-positive probability is provided it is used, otherwise the configured default applies. You can optionally specify in_memory=1 to force the filter to never be persisted to disk.

As an example:

create foobar capacity=1000000 prob=0.001

This will create a filter foobar that has a 1M initial capacity, and a 1/1000 probability of generating false positives. Valid responses are either "Done", "Exists", or "Delete in progress". The last response occurs if a filter of the same name was recently deleted, and bloomd has not yet completed the delete operation. If so, a client should retry the create in a few seconds.
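The constraints above can be checked client-side before issuing a create. A hypothetical helper (not part of any bloomd client library) that validates the arguments and builds the command line:

```python
import re

# Allowed filter-name characters per the bloomd docs
FILTER_NAME = re.compile(r"^[a-zA-Z0-9._]+$")

def build_create(name, capacity=None, prob=None, in_memory=None):
    """Validate arguments and return a bloomd `create` command line."""
    if not FILTER_NAME.match(name):
        raise ValueError("filter name may only contain a-z, A-Z, 0-9, ., _")
    if capacity is not None and capacity <= 10_000:
        raise ValueError("capacity must be > 10,000")
    if prob is not None and prob >= 0.1:
        raise ValueError("prob must be < 0.1")
    parts = ["create", name]
    if capacity is not None:
        parts.append("capacity=%d" % capacity)
    if prob is not None:
        parts.append("prob=%g" % prob)
    if in_memory is not None:
        parts.append("in_memory=%d" % (1 if in_memory else 0))
    return " ".join(parts) + "\n"

print(build_create("foobar", capacity=1_000_000, prob=0.001))
# create foobar capacity=1000000 prob=0.001
```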

The list command takes either no arguments or a set prefix, and returns information about the matching filters. Here is an example response to a command:

> list foo
START
foobar 0.001 1797211 1000000 0
END

With the list prefix "foo", the response shows a single filter named foobar with a 0.001 false-positive probability, a size of 1797211 bytes (about 1.8MB), a current capacity of 1M items, and 0 items currently set. The size and capacity scale automatically as more items are added.
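Each line between START and END follows a fixed field order (name, probability, size in bytes, capacity, item count), so parsing is mechanical. A small sketch (the dict keys are my own labels, not part of the protocol):

```python
def parse_list_line(line):
    """Parse one filter line from a bloomd `list` response."""
    name, prob, size, capacity, count = line.split()
    return {
        "name": name,
        "probability": float(prob),
        "bytes": int(size),
        "capacity": int(capacity),
        "items": int(count),
    }

print(parse_list_line("foobar 0.001 1797211 1000000 0"))
```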

The drop, close and clear commands are like create, but take only a filter name. Each returns either "Done" or "Filter does not exist". clear can also return "Filter is not proxied. Close it first.", which means the filter is still in memory and cannot yet be cleared; closing the filter first resolves this.

The check and set commands look similar; both have the form:

[check|set] filter_name key

The command must specify a filter and a key to use. They will either return "Yes", "No" or "Filter does not exist".

The multi and bulk commands are the many-key variants of check and set respectively, allowing many keys to be checked or set at once. Keys must be separated by a space:

[multi|bulk] filter_name key1 [key2 [key3 ... [keyN]]]

The check, multi, set and bulk commands can also be called by their aliases c, m, s and b respectively.
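On the wire these commands are just space-separated, newline-terminated lines, so a client needs little more than line framing. A minimal sketch of the encoding and decoding side (helper names are illustrative; the actual socket wiring is left out):

```python
def encode_command(cmd, filter_name, *keys):
    """Encode a bloomd command line: `cmd filter key1 key2 ...` plus newline."""
    return " ".join([cmd, filter_name, *keys]) + "\n"

def decode_bools(reply):
    """Turn a `Yes No ...` reply from check/set/multi/bulk into booleans."""
    return [word == "Yes" for word in reply.split()]

print(encode_command("m", "foobar", "zipzab", "blah"))  # `m` is the multi alias
print(decode_bools("Yes No No"))
```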

The info command takes a filter name, and returns information about the filter. Here is an example output:

START
capacity 1000000
checks 0
check_hits 0
check_misses 0
page_ins 0
page_outs 0
probability 0.001
sets 0
set_hits 0
set_misses 0
size 0
storage 1797211
END

The command may also return "Filter does not exist" if the filter does not exist.
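The info block is a list of `key value` pairs between START and END, so it maps naturally onto a dict. A hypothetical parser:

```python
def parse_info(lines):
    """Parse a bloomd `info` response (the lines between START and END) into a dict."""
    stats = {}
    for line in lines:
        line = line.strip()
        if not line or line in ("START", "END"):
            continue
        key, value = line.split()
        stats[key] = float(value) if "." in value else int(value)
    return stats

reply = """START
capacity 1000000
probability 0.001
size 0
storage 1797211
END"""
print(parse_info(reply.splitlines()))
```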

The flush command may be called with no arguments, in which case all filters are flushed, or with a filter name to flush just that filter. It returns either "Done" or "Filter does not exist".

Example

Here is an example of a client flow, assuming bloomd is running on the default port using just telnet:

$ telnet localhost 8673
> list
START
END

> create foobar
Done

> check foobar zipzab
No

> set foobar zipzab
Yes

> check foobar zipzab
Yes

> multi foobar zipzab blah boo
Yes No No

> bulk foobar zipzab blah boo
No Yes Yes

> multi foobar zipzab blah boo
Yes Yes Yes

> list
START
foobar 0.000100 300046 100000 3
END

> drop foobar
Done

> list
START
END

Performance

Although extensive performance evaluations have not been done, casual testing on a 2012 MBP with pure set/check operations allows for a throughput of at least 600K ops/sec. On Linux, response times can be as low as 1.5 μs.

Bloomd also takes advantage of multi-core systems for scalability, so it is important to tune it for the given workload. The number of worker threads can be configured either in the configuration file or with the -w flag. It should be set to at most 2x the CPU count. By default, only a single worker is used.

References

Here are some related works which we make use of: