CRLite uses a Bloom filter cascade and whole-ecosystem analysis of the Web PKI to push the entire web’s TLS revocation information to Firefox clients, replacing OCSP for most browser TLS connections, speeding up connection time while continuing to support PKI revocations. The system was originally proposed at IEEE S&P 2017.
For details about CRLite, Mozilla Security Engineering has a blog post series, and this repository has a FAQ.
There are also useful end-user tools for querying CRLite: `moz_crlite_query`, to query the current CRLite filter for revocations, and a diagnostic tool, `crlite_status`, to monitor filter generation metrics.
CRLite is designed to run in Kubernetes, with the following services:

* `containers/crlite-fetch`, a constantly-running task that downloads from Certificate Transparency logs into Redis and Google Firestore
* `containers/crlite-generate`, a periodic (cron) job that produces a CRLite filter from the data in Redis and uploads the artifacts into Google Cloud Storage
* `containers/crlite-publish`, a periodic (cron) job that publishes the results of a `crlite-generate` run to a Kinto instance
* `containers/crlite-signoff`, a periodic (cron) job that verifies and approves data that `crlite-publish` placed in a Kinto instance

There are scripts in `containers/` to build the Docker images using Docker; see `build-local.sh`. There are also builds at Docker Hub in the `mozilla/crlite` project.
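If you'd rather not build locally, the images can also be pulled from Docker Hub; the tag below is illustrative, so check the `mozilla/crlite` project for the current ones:

```sh
# Pull a prebuilt image (the tag name here is a placeholder).
docker pull mozilla/crlite:latest
```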
Storage consists of these parts:

* Redis and Google Firestore, which `crlite-fetch` populates with Certificate Transparency data;
* Google Cloud Storage, which holds the filter artifacts that `crlite-generate` uploads;
* a persistent disk claim for local processing, e.g. `containers/crl-storage-claim.yaml`.

This tooling monitors Certificate Transparency logs and, upon scheduled execution, `crlite-generate` produces a new filter and uploads it to Cloud Storage.
The process for producing a CRLite filter is run by `system/crlite-fullrun`, which is described in block form in this diagram:

The output Bloom filter cascade is built by the Python `mozilla/filter-cascade` tool and then read in Firefox by the Rust `mozilla/rust-cascade` package.
For complete details of the filter construction see Section III.B of the CRLite paper.
The keys used in the CRLite data structure consist of the SHA256 digest of the issuer's Subject Public Key Information field in DER-encoded form, followed by the certificate's serial number, unmodified, in DER-encoded form.
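As a sketch of that key layout, the two components can be inspected with OpenSSL; the file names are hypothetical, and this only prints the hex forms rather than concatenating the raw bytes as CRLite does:

```sh
# SHA256 digest of the issuer's DER-encoded Subject Public Key Information.
openssl x509 -in issuer.pem -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | sha256sum

# The certificate's serial number, appended unmodified in the real key.
openssl x509 -in end-entity.pem -noout -serial
```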
It's possible to run the tools locally, though you will need local instances of Redis and Firestore. First, install the tools and their dependencies:

```sh
go install github.com/mozilla/crlite/go/cmd/ct-fetch@latest
go install github.com/mozilla/crlite/go/cmd/aggregate-crls@latest
go install github.com/mozilla/crlite/go/cmd/aggregate-known@latest

pipenv install
```
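The Go binaries install into `$(go env GOPATH)/bin`. As a quick sanity check, you can point `ct-fetch` at the configuration file described below (this invocation is a sketch; only the `-config` flag itself is documented here):

```sh
# Run ct-fetch against a local config file; see the configuration
# parameters described in the next section.
$(go env GOPATH)/bin/ct-fetch -config ~/.ct-fetch.ini
```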
You can configure via environment variables, or via a config file. Environment variables are specified in the `/containers/*.properties.example` files. To use a configuration file, `~/.ct-fetch.ini` (or any file selected on the CLI using `-config`), construct it like so:

```
certPath = /ct
numThreads = 16
cacheSize = 128
```
You'll want to set a collection of configuration parameters:

* `runForever` [true/false]
* `logExpiredEntries` [true/false]
* `numThreads` 16
* `cacheSize` [number of cache entries. An individual entry contains an issuer-day's worth of serial numbers, which could be as much as 64 MB of RAM, but is generally closer to 1 MB.]
* `outputRefreshMs` [milliseconds]
The log list is all the logs you want to sync, comma separated, as URLs:

```
logList = https://ct.googleapis.com/icarus, https://oak.ct.letsencrypt.org/2021/
```

To get all current ones from certificate-transparency.org:

```sh
echo "logList = $(setup/list_all_active_ct_logs)" >> ~/.ct-fetch.ini
```
If running forever, set the delay on polling for new updates, per log. This will have some jitter added:

* `pollingDelay` [minutes]

If not running forever, you can give limits or slice up CT log data:

* `limit` [uint]
* `offset` [uint]

You'll also need to configure credentials used for Google Cloud Storage:

* `GOOGLE_APPLICATION_CREDENTIALS` [base64-encoded string of the service credentials JSON]

If you need to proxy the connection, perhaps via SSH, set `HTTPS_PROXY` to something like `socks5://localhost:32547/` as well.
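Putting these together, the remaining settings can be appended the same way as the `logList` example above; the values below are illustrative, not recommendations:

```sh
# Append illustrative values for the parameters described above.
cat >> ~/.ct-fetch.ini <<EOF
runForever = true
pollingDelay = 10
logExpiredEntries = false
EOF
```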
`containers/build-local.sh` produces the Docker containers locally.

`test-via-docker.sh` executes a complete "run", syncing with CT and producing a filter. It's configured using a series of environment variables.
Note that since all data is stored in Redis, a robust backup for the Redis information is warranted to avoid expensive resynchronization.
Redis can be provided in a variety of ways; the easiest is probably the Redis Docker distribution. For whatever reason, I have the best luck remapping ports to make it run on 6379:

```sh
docker run -p 6379:7000 redis:4 --port 7000
```
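Because losing Redis forces that expensive resynchronization, one option, assuming the stock Redis image, is to persist the dataset to a host volume (the host path here is a placeholder):

```sh
# Same remapped-port setup as above, but with append-only persistence
# and the dataset stored on the host, so restarts don't require a resync.
docker run -p 6379:7000 \
  -v /var/lib/crlite-redis:/data \
  redis:4 --port 7000 --appendonly yes
```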
To construct a container, see `containers/README.md`.
The crlite-fetch container runs forever, fetching CT updates:

```sh
docker run --rm -it \
  -e "FIRESTORE_EMULATOR_HOST=my_ip_address:8403" \
  -e "outputRefreshMs=1000" \
  crlite:staging-fetch
```
The crlite-generate container constructs a new filter. To use local disk, set the `certPath` to `/ctdata` and mount that volume in Docker. You should also mount the volume `/processing` to get the output files:

```sh
docker run --rm -it \
  -e "certPath=/ctdata" \
  -e "outputRefreshMs=1000" \
  --mount type=bind,src=/tmp/ctlite_data,dst=/ctdata \
  --mount type=bind,src=/tmp/crlite_results,dst=/processing \
  crlite:staging-generate
```
See `test-via-docker.sh` for an example.
To run in a remote container, such as a Kubernetes pod, you'll need to make sure to set all the environment variables properly, and the container should otherwise work. See `containers/crlite-config.properties.example` for an example of the Kubernetes environment that can be imported using `kubectl create configmap`; see the `containers/` README.md for details.
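For example, the import might look like this sketch, which assumes the example file uses `KEY=VALUE` lines and picks a hypothetical configmap name:

```sh
# Create a configmap named "crlite-config" from the example properties file.
kubectl create configmap crlite-config \
  --from-env-file=containers/crlite-config.properties.example
```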
`ct-fetch`: Downloads all CT entries' certificates to a Firestore instance and collects their metadata.

`aggregate-crls`: Obtains all CRLs defined in all CT entries' certificates, verifies them, and collates their results into *issuer SKI base64*`.revoked` files.

`aggregate-known`: Collates all CT entries' unexpired certificates into *issuer SKI base64*`.known` files.
The filter itself is constructed with `filter_cascade` from the `mozilla/filter-cascade` project and read in Firefox using `mozilla/rust-cascade`.