Kraken is a P2P-powered Docker registry that focuses on scalability and availability. It is designed for Docker image management, replication, and distribution in a hybrid cloud environment. With pluggable backend support, Kraken can easily integrate into existing Docker registry setups as the distribution layer.
Kraken has been in production at Uber since early 2018. In our busiest cluster, Kraken distributes more than 1 million blobs per day, including 100k 1G+ blobs. At its peak production load, Kraken distributes 20K 100MB-1G blobs in under 30 sec.
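To put that peak load in perspective, a back-of-envelope lower bound on aggregate throughput (our own arithmetic, not a published Kraken figure) assumes every one of the 20K blobs sits at the 100MB end of the range:

```go
package main

import "fmt"

// aggregateMBps gives a lower bound on cluster-wide distribution
// throughput: total bytes moved divided by the time window.
func aggregateMBps(blobs, minBlobMB, windowSec int) int {
	return blobs * minBlobMB / windowSec
}

func main() {
	// 20K blobs of at least 100MB each, distributed in under 30 sec.
	fmt.Printf("aggregate >= %d MB/s\n", aggregateMBps(20000, 100, 30)) // >= 66666 MB/s, roughly 65 GB/s
}
```

Since many of those blobs are larger than 100MB and the window is under 30 seconds, the real aggregate rate is higher still.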
Below is the visualization of a small Kraken cluster at work:
Following are some highlights of Kraken:
The high-level idea of Kraken is to have a small number of dedicated hosts seeding content to a network of agents running on each host in the cluster.
A central component, the tracker, orchestrates all participants in the network into a pseudo-random regular graph.
Such a graph has high connectivity and a small diameter. As a result, even with only one seeder and having thousands of peers joining in the same second, all participants can reach a minimum of 80% max upload/download speed in theory (60% with current implementation), and performance doesn't degrade much as the blob size and cluster size increase. For more details, see the team's tech talk at KubeCon + CloudNativeCon.
The following data is from a test in which a 3G Docker image with 2 layers is downloaded by 2600 hosts concurrently (5200 blob downloads), with a 300MB/s speed limit on all agents (using 5 trackers and 5 origins):
All Kraken components can be deployed as Docker containers. To build the Docker images:
$ make images
For information about how to configure and use Kraken, please refer to the documentation.
You can use our example Helm chart to deploy Kraken (with an example HTTP fileserver backend) on your k8s cluster:
$ helm install --name=kraken-demo ./helm
Once deployed, every node will have a docker registry API exposed on localhost:30081.
For an example pod spec that pulls images from the Kraken agent, see example.
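Once the chart is installed, a pull on any node might look like the following (the repository and tag here are placeholders; substitute whatever your configured backend actually serves):

```shell
# Hypothetical example; localhost:30081 is the registry port exposed
# by the example Helm chart on every node.
docker pull localhost:30081/library/hello-world:latest
```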
For more information on k8s setup, see README.
To start a herd container (which contains origin, tracker, build-index and proxy) and two agent containers with development configuration:
$ make devcluster
Docker-for-Mac is required to run devcluster on your laptop. For more information on devcluster, please check out the devcluster README.
A Dragonfly cluster has one or a few "supernodes" that coordinate the transfer of every 4MB chunk of data in the cluster.
While the supernode would be able to make optimal decisions, the throughput of the whole cluster is limited by the processing power of one or a few hosts, and the performance would degrade linearly as either blob size or cluster size increases.
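The linear degradation follows from a rough counting argument (an illustrative model of central coordination in general, not a measurement of Dragonfly): if a central coordinator schedules every 4MB chunk for every peer, its decision load grows linearly in both blob size and peer count.

```go
package main

import "fmt"

const chunkMB = 4

// decisions models the number of chunk-transfer decisions a central
// coordinator makes if it schedules every chunk for every peer.
// Illustrative model only, not a benchmark of any real system.
func decisions(blobMB, peers int) int {
	return (blobMB / chunkMB) * peers
}

func main() {
	fmt.Println(decisions(1024, 100))   // 1G blob, 100 peers  -> 25600
	fmt.Println(decisions(1024, 1000))  // 10x the peers       -> 256000
	fmt.Println(decisions(10240, 1000)) // 10x the blob size   -> 2560000
}
```

In Kraken, those per-chunk decisions are negotiated peer-to-peer, so the tracker's load scales with the number of peers joining, not with the number of chunks transferred.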
Kraken's tracker only helps orchestrate the connection graph and leaves the negotiation of actual data transfer to individual peers, so Kraken scales better with large blobs. On top of that, Kraken is HA and supports cross-cluster replication, both are required for a reliable hybrid cloud setup.
Kraken was initially built with a BitTorrent driver; however, we ended up implementing our own P2P driver based on the BitTorrent protocol to allow for tighter integration with storage solutions and more control over performance optimizations.
Kraken's problem space is slightly different from what BitTorrent was designed for. Kraken's goal is to reduce global max download time and communication overhead in a stable environment, while BitTorrent was designed for an unpredictable and adversarial environment, so it needs to preserve more copies of scarce data and defend against malicious or badly behaving peers.
Despite the differences, we re-examine Kraken's protocol from time to time, and, if feasible, we hope to make it compatible with BitTorrent again.
If registry throughput is not the bottleneck in your deployment workflow, switching to Kraken will not by itself speed up docker pull. To speed up docker pull, consider switching to Makisu to improve layer reusability at build time, or tweaking compression ratios, as docker pull spends most of its time on data decompression.

Mutating tags (e.g. updating a latest tag) is allowed; however, a few things will not work: tag lookups immediately afterwards will still return the old value due to Nginx caching, and replication probably won't trigger. We are working on supporting this functionality better. If you need tag mutation support right now, please reduce the cache interval of the build-index component. If you also need replication in a multi-cluster setup, please consider setting up another Docker registry as Kraken's backend.

Please check out our guide.
To contact us, please join our Slack channel.