2D face recognition has been proven insecure against physical adversarial attacks. However, few studies have investigated the possibility of attacking real-world 3D face recognition systems, and recently proposed 3D-printed attacks cannot generate adversarial points in the air. In this paper, we attack 3D face recognition systems through elaborate optical noises, taking structured light 3D scanners as the attack target. End-to-end attack algorithms are designed to generate adversarial illumination for 3D faces through the inherent or an additional projector, producing adversarial points at arbitrary positions. However, face reflectance is a complex process because skin is translucent. To include this projection-and-capture procedure in the optimization loop, we model it with a Lambertian rendering model and use SfSNet to estimate the albedo. Moreover, to improve robustness to distance and angle changes while keeping the perturbation unnoticeable, a 3D-transform-invariant loss and two kinds of sensitivity maps are introduced. Experiments are conducted in both the simulated and the physical world. We successfully attacked point-cloud-based and depth-image-based 3D face recognition algorithms while requiring fewer perturbations than previous state-of-the-art physical-world 3D adversarial attacks.
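The abstract above mentions modeling the projection-and-capture step with a Lambertian rendering model, with the albedo estimated by SfSNet. As a rough illustration of that model only (the function name `lambertian_render` is hypothetical, not from this codebase):

```python
import numpy as np

def lambertian_render(albedo, normals, light_dir):
    """Lambertian shading: I = albedo * max(0, n . l).

    albedo:    (H, W) per-pixel reflectance (e.g. estimated by SfSNet)
    normals:   (H, W, 3) unit surface normals
    light_dir: (3,) unit vector pointing toward the light source
    """
    shading = np.clip(normals @ light_dir, 0.0, None)  # clamped cosine term
    return albedo * shading

# Toy example: a flat patch lit head-on is shaded by cos(0) = 1,
# so every pixel equals its albedo.
albedo = np.full((2, 2), 0.5)
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0  # all normals point along +z
image = lambertian_render(albedo, normals, np.array([0.0, 0.0, 1.0]))
print(image)  # every pixel is 0.5
```

The real pipeline composes this rendering step with the scanner's reconstruction so that gradients can flow from the classifier back to the projected illumination.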
Official implementation of our StructuredLightAttack paper. We extend the C&W attack to include the 3D structured light reconstruction process. Only the attack code is included here, without the 3D structured light reconstruction code; for that, please refer to https://github.com/phreax/structured_light.git.
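For readers unfamiliar with the C&W attack being extended, here is a minimal sketch of its untargeted objective, evaluated on the logits produced after the (differentiable) reconstruction step. This is an illustration only; `cw_objective` is a hypothetical name, not a function in this repository:

```python
import numpy as np

def cw_objective(logits, true_label, delta, c=1.0, kappa=0.0):
    """C&W-style untargeted objective.

    logits: classifier outputs for the adversarial (reconstructed) face
    delta:  perturbation applied to the fringe/phase images
    Minimizes ||delta||^2 + c * max(z_true - max_other, -kappa); the second
    term becomes non-positive once the face is misclassified by margin kappa.
    """
    z_true = logits[true_label]
    z_other = np.max(np.delete(logits, true_label))
    adv_term = max(z_true - z_other, -kappa)
    return np.sum(delta ** 2) + c * adv_term
```

In the paper's setting, `delta` lives in the illumination (phase) domain rather than directly on the point cloud, which is what couples the attack to the structured light pipeline.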
2023.3.10: Initial code release
git clone https://github.com/PolyLiYJ/SLAttack.git
cd SLAttack
conda install --yes --file requirements.txt
To execute the untargeted L2Loss attack on the Bosphorus dataset against PointNet and save the adversarial point cloud:
L2Loss is optimized on the phase map; please see our paper for details.
python3 Test_CW_SL.py --dataset Bosphorus \
--model PointNet \
--dist_function L2Loss --binary_step 1
For comparison, you can also execute the L2Loss_pt attack on the Bosphorus dataset against PointNet and save the adversarial point cloud:
L2Loss_pt is optimized directly on the point cloud.
python3 Test_CW_SL.py --dataset Bosphorus \
--model PointNet \
--dist_function L2Loss_pt
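The difference between the two distance functions above is only the domain in which the perturbation size is measured. A minimal sketch of that distinction (illustrative helper names, not the repository's actual implementations):

```python
import numpy as np

def l2_loss_phase(adv_phase, phase):
    """L2Loss: perturbation measured on the phase map (H, W),
    i.e. on what the projector can actually change."""
    return np.sum((adv_phase - phase) ** 2)

def l2_loss_pt(adv_pc, pc):
    """L2Loss_pt: perturbation measured on the reconstructed
    point cloud (N, 3)."""
    return np.sum((adv_pc - pc) ** 2)
```

A small phase perturbation can move reconstructed points a long way along the camera ray, which is why the two losses lead to different adversarial point clouds.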
To execute the targeted L2Loss attack (note the --whether_target flag) on the Bosphorus dataset against PointNet and save the adversarial point cloud:
L2Loss is optimized on the phase map; please see our paper for details.
python3 Test_CW_SL.py --dataset Bosphorus \
--model PointNet \
--dist_function L2Loss \
--whether_target --binary_step 1
For comparison, you can also execute the targeted L2Loss_pt attack on the Bosphorus dataset against PointNet and save the adversarial point cloud:
L2Loss_pt is optimized directly on the point cloud.
python3 Test_CW_SL.py --dataset Bosphorus \
--model PointNet \
--dist_function L2Loss_pt \
--whether_target
To train the target classifier:
python Train_classifier.py --model PointNet
Untargeted attack evaluation:
python Evaluate.py --whether_1d --attack_lr 0.01 --num_iter 100 --early_break --binary_step 5 --dist_function L2Loss
Targeted attack evaluation:
python Evaluate.py --whether_target --whether_1d --attack_lr 0.001 --num_iter 100 --binary_step 5 --dist_function L2Loss
Evaluation against PointNet++ (MSG):
python Evaluate.py --whether_1d --attack_lr 0.001 --num_iter 1000 --binary_step 10 --dist_function L2Loss --model PointNet++Msg
To generate the adversarial fringe images for projection from a normal and an adversarial point cloud:
python get_adv_illumination.py --normal_pc test_face_data/person1.txt --adv_pc test_face_data/adv_person1_untargeted_L1Loss_5.txt --outfolder test_face_data/person1/adversarial_fringe
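For context on what the adversarial fringe images perturb: structured light scanners typically project N-step phase-shifted sinusoidal patterns and recover depth from the decoded phase. The sketch below generates standard patterns of that form; it is textbook structured light, not code taken from this repository:

```python
import numpy as np

def fringe_patterns(width=1920, height=1080, freq=32, steps=4):
    """N-step phase-shifted fringe images for a projector:

        I_k(x) = 0.5 + 0.5 * cos(2*pi*freq*x/width + 2*pi*k/steps)

    for k = 0..steps-1. Perturbing the phase term shifts where the
    scanner reconstructs each 3D point, which is what the attack exploits.
    """
    x = np.arange(width) / width
    patterns = []
    for k in range(steps):
        row = 0.5 + 0.5 * np.cos(2 * np.pi * freq * x + 2 * np.pi * k / steps)
        patterns.append(np.tile(row, (height, 1)))
    return np.stack(patterns)  # (steps, height, width), values in [0, 1]
```

The adversarial fringes written by the command above are, conceptually, patterns like these with an optimized phase perturbation baked in.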
If you use this code for your research, please cite our paper Physical-World Optical Adversarial Attacks on 3D Face Recognition:
@inproceedings{yanjieli2023physicalworld,
  title={Physical-World Optical Adversarial Attacks on 3D Face Recognition},
  author={Yanjie Li and Yiquan Li and Xuelong Dai and Songtao Guo and Bin Xiao},
  booktitle={Conference on Computer Vision and Pattern Recognition 2023},
  year={2023},
  url={https://openreview.net/forum?id=vGZl0N9s0s}
}