This repository holds a study on the optimization of the Cascade Protocol. The protocol is studied for use in the Information Reconciliation step of Quantum Key Distribution post-processing.
To create a dataset of pairs of keys with a desired error rate, the following command is available:
cascade-study create_dataset <key length> <error rate> [options]
Available options are:
-s --size <dataset size> Set the number of key pairs in the dataset (default is 10000)
-o --out <output filename> Set the name of the output file (default is <key length>-<error rate>.csv)
-nc --num-cores <number of cores> Set the number of processor cores to use to create the dataset
-v --verbose Set the printing of the number of completed tasks to verbose
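The exact generation procedure used by cascade-study is not shown here, but one plausible sketch of producing a key pair with a target error rate (flipping an exact fraction of bit positions; the function and parameter names below are hypothetical) is:

```python
import random

def make_key_pair(key_length, error_rate, seed=0):
    """Generate a reference key and a copy with a fixed fraction of flipped bits."""
    rng = random.Random(seed)
    correct = [rng.randint(0, 1) for _ in range(key_length)]
    noisy = list(correct)
    # Flip exactly round(error_rate * key_length) distinct positions,
    # so the observed error rate matches the requested one closely.
    for i in rng.sample(range(key_length), round(error_rate * key_length)):
        noisy[i] ^= 1
    return correct, noisy

correct, noisy = make_key_pair(1024, 0.05)
errors = sum(a != b for a, b in zip(correct, noisy))
```

An alternative would be to flip each bit independently with probability equal to the error rate, which gives a binomially distributed error count instead of an exact one.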
To run an algorithm for a dataset, the following command is available:
cascade-study run_algorithm <algorithm> <dataset file> [options]
The possible algorithms are original, biconf, yanetal, option7 and option8. The available options are:
-bi --block-inference Use the Block Parity Inference optimization
-nl --num-lines <num lines> Set the number of key pairs to process
-r --runs <num runs> Set the number of algorithm runs per key pair (using different seeds). Default = 1
-sl --stats-level <1|2|3> Set the level of stats to output (1 = no stats, 2 = regular output (default), 3 = BER for all iterations)
-o --out <output filename> Set the name of the output file (default is <algorithm>-<key length>-<error rate>.res.csv)
-nc --num-cores <number of cores> Set the number of processor cores to use to run the algorithm
-v --verbose Set the printing of the number of completed tasks to verbose
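The algorithm implementations themselves live in the repository, not in this README. As a rough illustration of the core primitive the original Cascade algorithm relies on, the sketch below shows the BINARY bisection step: given a block whose parity differs between the two keys, a binary search over sub-block parities locates one error (all names here are illustrative, not the repository's API):

```python
def parity(bits):
    """Parity (0 or 1) of a list of bits."""
    return sum(bits) % 2

def binary_locate(alice, bob, lo, hi):
    """Locate one error in bob[lo:hi], assuming the two blocks' parities differ.

    Each step halves the search range by comparing the parity of the
    left half on both sides; the mismatched half must contain an error.
    """
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice[lo:mid]) != parity(bob[lo:mid]):
            hi = mid
        else:
            lo = mid
    return lo

alice = [0, 1, 1, 0, 1, 0, 0, 1]
bob   = [0, 1, 1, 0, 0, 0, 0, 1]  # single flipped bit at index 4
pos = binary_locate(alice, bob, 0, len(alice))
bob[pos] ^= 1  # correct the located error
```

In the full protocol this step runs over many shuffled blocks across several passes; the variants listed above (biconf, yanetal, etc.) differ in how blocks are chosen and reconciled.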
To validate the results of a run of the algorithm (verify the integrity or replicability of the results), the following command is available:
cascade-study replicate_run <algorithm> <results file> [options]
Available options are:
-bi --block-inference Use the Block Parity Inference optimization
-nl --num-lines <num lines> Set the number of key pairs to process
-r --runs <num runs> Set the number of algorithm runs per key pair (using different seeds). Default = 1
-sl --stats-level <1|2|3> Set the level of stats to output (1 = no stats, 2 = regular output (default), 3 = BER for all iterations)
-o --out <output filename> Set the name of the output file (default is <algorithm>-<key length>-<error rate>.res.csv)
-nc --num-cores <number of cores> Set the number of processor cores to use to run the algorithm
-v --verbose Set the printing of the number of completed tasks to verbose
This will output a replica file with the results of running the algorithm with the same keys and random seeds. To obtain a correct validation, both the algorithm argument and the block-inference flag must have the same values as in the original run. The original dataset file must also be available. The results can then be compared line by line, which can easily be achieved with a bash snippet like:
if [ "$(sort <original file>)" == "$(sort <replica file>)" ]; then echo 'Valid results'; fi
To process the files with results from run_algorithm, the following command is available:
cascade-study process_results <results file>* [-o <output_file>]
This will process all input files into a single file with the average and variance of each field. To extract the algorithm name and key length, it expects the input files to be named <algorithm>-<key length>-[irrelevant].
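The aggregation performed by process_results can be pictured with a small sketch. The column names below are purely illustrative, and it is an assumption that sample (rather than population) variance is used:

```python
import csv
import io
import statistics

# Hypothetical results content; real column names depend on the stats level.
raw = """iterations,channel_uses
3,120
4,150
3,135
"""

def aggregate(csv_text):
    """Average and sample variance for every numeric field of a results file."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    out = {}
    for field in rows[0]:
        values = [float(r[field]) for r in rows]
        out[field] = (statistics.mean(values), statistics.variance(values))
    return out

stats = aggregate(raw)
```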
To create charts from the data in a file like the process results output file, the following command is available:
cascade-study create_chart <input file> <x axis name> <y axis name> [options]