
Chapar


A framework for modular verification of causal consistency for replicated key-value store implementations and their client programs in Coq. Includes proofs of the causal consistency of two key-value store implementations and a simple automatic model checker for the correctness of client programs.

Meta

Building and installation instructions

The easiest way to install the latest released version of Chapar is via OPAM:

opam repo add coq-released https://coq.inria.fr/opam/released
opam install coq-chapar

To instead build and install manually, do:

git clone https://github.com/coq-community/chapar.git
cd chapar
make   # or make -j <number-of-cores-on-your-machine> 
make install

Chapar Executable Key-value Stores

Three key-value stores, verified to be causally consistent in the Coq proof assistant and extracted to executable code. See here for the requirements to build the stores.

Documentation

Coq Framework

The Coq definitions and proofs are located in the theories directory. The code locations of the definitions and lemmas presented in the paper are listed below.

Semantics and the Proof Technique

Algorithms

Clients

Experiment Setup

Directory structure

Running Experiments

Overview

We run with four nodes called the worker nodes and one node called the master node, which keeps track of the start and end of the runs. The scripts support running either with the current terminal blocked or detached. In the blocked mode, the terminal must stay active for the entire execution time. To avoid this, we use another node called the launcher node: repeating the runs and collecting their results is delegated to the launcher, so the terminal can be closed and the results retrieved from the launcher node later. The four workers, the master, and the launcher can all be different nodes. However, to simplify running, the scripts also support assigning a single host to all of these roles; this is the default setting.

The settings of the nodes can be edited in the file Settings.txt. If other machines are used as the running nodes, note the following: the host should have password-less ssh access to the launcher node, and the launcher node should have password-less ssh access to the other nodes. This can be set up by copying the public key of the accessing machine to the accessed machine with a command like:

cat ~/.ssh/id_dsa.pub | ssh -l remoteuser remote.example.com 'cat >> ~/.ssh/authorized_keys'

The port numbers 9100, 9101, 9102, and 9103 should be open on worker nodes 1, 2, 3, and 4, respectively. Port 9099 should be open on the master node.

A simple run

To start the run:

./batchrundetach

To check the status of the run:

./printlauncherout

To get the results once the run is finished:

./fetchresults

The results are stored in the file RemoteAllResults.txt. See fetchresults below for the format of the results.

Settings and scripts

All of the following files are in the root directory:

The experiments in the paper

The goal of our experimental results section was to show that our verification effort can lead to executable code, and to compare the performance of the two algorithms. As described in the paper, the experiments were done on a cluster of four worker nodes. Each worker node had an Intel(R) Xeon(R) 2.27GHz CPU with 2GB of RAM and ran Ubuntu Linux 14.04.2 with kernel version 3.13.0-48-generic #80-Ubuntu. The nodes were connected to a gigabit switch. The keys were uniformly selected from 0 to 50 for the benchmarks. Each experiment was repeated 5 times. (The reported numbers are the arithmetic mean of the five runs.) Each node processed 60,000 requests. The put ratio ranged from 10% to 90%.

Here are the contents of the two configuration files, Settings.txt and batchrun (note: user and ip should be filled in with specific values):

Interpretation of results from the paper

As expected, the throughput of both stores increases as the ratio of get operations increases. The second algorithm shows higher throughput than the first, for two reasons. First, in the first algorithm, the clock function of a node keeps an over-approximation of the node's dependencies. This over-approximation imposes extra dependencies on updates, whereas the second algorithm does not require any extra dependencies. As a result, in the first algorithm the updates can have longer waiting times and the update queues tend to be longer, so traversing the update queue is more time consuming than in the second algorithm. Second, the update payload sent and received by the first algorithm contains the clock function, and OCaml cannot marshal functions. However, since the clock function has the finite domain of the participating nodes, it can be serialized to and deserialized from a list. Nonetheless, this serialization and deserialization on every sent and received message adds a performance cost. The payload of the second algorithm, on the other hand, consists only of data types that can be directly marshalled, so the second algorithm incurs no extra marshalling cost.
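
To illustrate the serialization point, here is a minimal OCaml sketch (not taken from the Chapar sources; the names nodes, clock_to_list, and clock_of_list and the node ids are hypothetical) of how a clock represented as a function over a finite set of node ids can be converted to an association list for marshalling and rebuilt on the receiving side:

(* Hypothetical sketch, not code from the Chapar distribution. *)

(* The participating nodes (assumed ids for illustration). *)
let nodes = [0; 1; 2; 3]

(* Serialize: sample the clock function at every node id. *)
let clock_to_list (clock : int -> int) : (int * int) list =
  List.map (fun n -> (n, clock n)) nodes

(* Deserialize: rebuild a function from the association list,
   defaulting to 0 for ids not present. *)
let clock_of_list (l : (int * int) list) : int -> int =
  fun n -> try List.assoc n l with Not_found -> 0

(* The list, unlike the closure, can be passed to Marshal. *)
let encode clock = Marshal.to_string (clock_to_list clock) []
let decode s = clock_of_list (Marshal.from_string s 0)

The extra List.map on send and List.assoc lookups on receive are exactly the per-message serialization cost mentioned above, which the second algorithm avoids by keeping its payload in directly marshallable data types.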