CVLFace is a powerful and versatile toolkit for achieving state-of-the-art performance in face recognition. Whether you're a researcher exploring new algorithms or a developer building real-world applications, CVLFace empowers you with:
CVLFace was created by MSU CVLab to foster innovation, collaboration, and accessibility in face recognition. Built on cutting-edge research and technology, it offers a user-friendly experience for both beginners and seasoned practitioners.
Visit our Documentation for more details: Documentation Website
Supported Papers: CVLFace seamlessly integrates with widely-cited face recognition algorithms, such as ArcFace, CosFace, AdaFace, and KP-RPE.
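ArcFace and CosFace are margin-based softmax losses: they penalize the target-class logit before the softmax so that embeddings of the same identity cluster more tightly, while AdaFace additionally adapts the margin per sample based on estimated image quality. As a rough illustration only (not the repository's implementation; the scale and margin values below are illustrative defaults from the papers), the margin applied to the target-class logit can be sketched as:

```python
import math

def margin_logit(cos_theta, margin, kind="cosface", scale=64.0):
    """Apply a margin to the target-class logit (illustrative sketch).

    CosFace subtracts the margin from the cosine: s * (cos(theta) - m).
    ArcFace adds the margin to the angle:         s * cos(theta + m).
    """
    if kind == "cosface":
        return scale * (cos_theta - margin)
    if kind == "arcface":
        theta = math.acos(max(-1.0, min(1.0, cos_theta)))
        return scale * math.cos(theta + margin)
    raise ValueError(f"unknown kind: {kind}")

# With cos(theta) = 0.8 and m = 0.35, CosFace yields roughly 64 * 0.45 = 28.8
print(margin_logit(0.8, 0.35, "cosface"))
```

The non-target logits are left as `s * cos(theta)`, so the margin makes the training objective strictly harder for the correct class.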
Arch | Loss | Dataset | Link | AVG | LFW | CPLFW | CFPFP | CALFW | AGEDB | IJBB@0.01 | IJBC@0.01 | TinyFace R1 | TinyFace R5 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ViT KPRPE [1] | AdaFace [2] | WebFace12M | To be released on Aug. | 93.13 | 99.82 | 95.65 | 99.30 | 95.93 | 98.10 | 96.55 | 97.82 | 76.10 | 78.92 |
ViT KPRPE [1] | AdaFace [2] | WebFace4M | 🤗 | 92.76 | 99.83 | 95.40 | 99.01 | 96.00 | 97.67 | 95.56 | 97.13 | 75.75 | 78.49 |
ViT [1] | AdaFace [2] | WebFace4M | 🤗 | 92.48 | 99.80 | 94.97 | 98.94 | 96.03 | 97.48 | 95.60 | 97.14 | 74.79 | 77.58 |
IR101 [3] | AdaFace [2] | WebFace12M | 🤗 | 92.13 | 99.82 | 94.57 | 99.24 | 96.12 | 98.00 | 96.46 | 97.72 | 72.42 | 74.81 |
IR101 [3] | AdaFace [2] | WebFace4M | 🤗 | 91.98 | 99.83 | 94.63 | 99.27 | 96.05 | 97.90 | 96.10 | 97.46 | 72.13 | 74.49 |
IR101 [3] | ArcFace [3] | WebFace4M | 🤗 | 91.76 | 99.78 | 94.35 | 99.21 | 96.00 | 97.95 | 95.83 | 97.30 | 71.03 | 74.41 |
IR101 [3] | AdaFace [2] | MS1MV3 | 🤗 | 90.99 | 99.83 | 93.92 | 99.09 | 96.02 | 98.18 | 95.82 | 97.05 | 67.95 | 71.03 |
IR101 [3] | AdaFace [2] | MS1MV2 | 🤗 | 90.90 | 99.80 | 93.53 | 98.61 | 96.12 | 98.05 | 95.59 | 96.81 | 68.11 | 71.49 |
IR50 [3] | AdaFace [2] | MS1MV2 | 🤗 | 89.96 | 99.85 | 92.85 | 98.09 | 96.07 | 97.85 | 94.86 | 96.20 | 64.99 | 68.88 |
IR50 [3] | AdaFace [2] | WebFace4M | 🤗 | 91.48 | 99.78 | 94.17 | 98.99 | 95.98 | 97.78 | 95.49 | 97.01 | 70.20 | 73.93 |
IR50 [3] | AdaFace [2] | CASIA | 🤗 | 77.43 | 99.37 | 90.02 | 97.04 | 93.43 | 94.40 | 46.04 | 52.97 | 59.44 | 64.14 |
IR18 [3] | AdaFace [2] | WebFace4M | 🤗 | 89.55 | 99.58 | 92.28 | 97.80 | 95.52 | 96.48 | 92.75 | 94.79 | 66.07 | 70.71 |
IR18 [3] | AdaFace [2] | VGG2 | 🤗 | 88.12 | 99.53 | 91.73 | 97.64 | 93.90 | 94.07 | 90.07 | 92.40 | 64.62 | 69.15 |
IR18 [3] | AdaFace [2] | CASIA | 🤗 | 72.40 | 99.22 | 87.00 | 94.93 | 92.65 | 92.68 | 30.36 | 37.10 | 56.20 | 61.43 |
CVLFace includes several practical apps to demonstrate and utilize the capabilities of the toolkit in real-world scenarios. Currently, there are two main applications:
The Face Alignment App processes facial images to align them to a canonical position suitable for face recognition. This app automatically resizes the images to 112x112 pixels, optimizing them for consistent input to face recognition models.
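The recognition models expect 112x112 aligned crops. The app itself performs landmark-based alignment; purely for illustration (this is a naive stand-in, not the app's actual alignment), the centered square crop box that precedes a 112x112 resize can be computed like this:

```python
def center_square_box(width: int, height: int):
    """Return a (left, top, right, bottom) crop box for the largest
    centered square; the crop is then resized to 112x112 for the models."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

print(center_square_box(640, 480))  # (80, 0, 560, 480)
```

In practice, alignment warps the face so that the eyes, nose, and mouth land at canonical coordinates, which a plain center crop does not guarantee.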
The Face Verification App verifies the identity of a person by comparing their facial image with a reference image. This app uses a pre-trained face recognition model to calculate the similarity between the two images and determine if they belong to the same person.
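Verification ultimately reduces to thresholding the similarity between the two embeddings. A minimal sketch (the 0.3 threshold is illustrative, not a value from CVLFace; real thresholds are tuned per model and dataset):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb1, emb2, threshold=0.3):
    # threshold is illustrative; tune it per model and dataset
    return cosine_similarity(emb1, emb2) > threshold

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```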
conda create -n cvlface python=3.10 pytorch=2.1.2 torchvision=0.16.2 torchaudio=2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia
conda activate cvlface
git clone https://github.com/mk-minchul/CVLface.git
cd CVLface
pip install -r requirements.txt
CVLface/cvlface/.env
Modify the following environment variables in the .env file:
cd cvlface
vim .env  # edit the following environment variables
"""(content to add to .env)
DATA_ROOT="YOUR_PATH_TO_DATA"
HF_TOKEN="YOUR_HUGGINGFACE_HF_TOKEN"
WANDB_TOKEN="YOUR_WANDB_TOKEN"
"""
Download the Evaluation Toolkit for evaluating models during or after training. Take a look at README_EVAL_TOOLKIT.md for details.
We offer more than 10 pre-trained models. Take a look at README_MODELS.md for details.
Take a look at README_TRAIN_DATA.md for details on the documented datasets.
Quickly test whether the installation succeeded by running the following command. (No training dataset is needed for the mock run; only the evaluation toolkit is required.)
# mock run to test the installation and evaluation toolkit
cd cvlface/research/recognition/code/run_v1
python train.py trainers.prefix=test_run \
trainers.num_gpu=1 \
trainers.batch_size=32 \
trainers.limit_num_batch=128 \
trainers.gradient_acc=1 \
trainers.num_workers=8 \
trainers.precision='32-true' \
trainers.float32_matmul_precision='high' \
dataset=configs/synthetic.yaml \
data_augs=configs/basic_v1.yaml \
models=iresnet/configs/v1_ir50.yaml \
pipelines=configs/train_model_cls.yaml \
evaluations=configs/base.yaml \
classifiers=configs/fc.yaml \
optims=configs/step_sgd.yaml \
losses=configs/cosface.yaml
# full training run: IR101 + AdaFace on WebFace4M with 7 GPUs
cd cvlface/research/recognition/code/run_v1
LIGHTING_TESTING=1 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6 lightning run model \
--strategy=ddp \
--devices=7 \
--precision="32-true" \
train.py trainers.prefix=ir101_WF4M_adaface \
trainers.num_gpu=7 \
trainers.batch_size=256 \
trainers.gradient_acc=1 \
trainers.num_workers=8 \
trainers.precision='32-true' \
trainers.float32_matmul_precision='high' \
dataset=configs/webface4m.yaml \
data_augs=configs/basic_v1.yaml \
models=iresnet/configs/v1_ir101.yaml \
pipelines=configs/train_model_cls.yaml \
evaluations=configs/full.yaml \
classifiers=configs/fc.yaml \
optims=configs/step_sgd.yaml \
losses=configs/adaface.yaml \
trainers.skip_final_eval=False
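Assuming trainers.batch_size is per GPU (common in DDP setups; unverified for this repository), the effective global batch size of the run above is num_gpu × batch_size × gradient_acc:

```python
def effective_batch_size(num_gpu: int, per_gpu_batch: int, grad_acc: int) -> int:
    """Global number of samples consumed per optimizer step."""
    return num_gpu * per_gpu_batch * grad_acc

print(effective_batch_size(7, 256, 1))  # 1792
```

Raising trainers.gradient_acc lets you match this global batch size on fewer or smaller GPUs at the cost of more forward/backward passes per step.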
More examples can be found at cvlface/research/recognition/code/run_v1/scripts/examples
We encourage contributions to CVLFace, including:
We would like to express our gratitude to the following for their contributions and support:
Join us in pushing the boundaries of face recognition technology with CVLFace!