Closed linhaojia13 closed 4 years ago
This repo is the exact code I used to run the experiments for the paper. Adversarial training is done by adding a fast gradient L2 adversarial loss to the usual training loss. For PointNet, you can see the adversarial training code here: https://github.com/Daniel-Liu-c0deb0t/3D-Neural-Network-Adversarial-Attacks/blob/master/src/pointnet/train.py#L114. For PointNet++, it is similar.
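A minimal sketch of the idea described above, independent of the repo's actual code: a fast gradient perturbation normalized in the L2 norm is applied to the input, and the loss on the perturbed input is added to the clean loss. The linear model, the squared-error loss, and the function names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fgm_l2_perturbation(grad_x, epsilon):
    # Fast gradient method with L2 normalization: step a distance of
    # epsilon along the input-gradient direction, measured in L2 norm.
    norm = np.linalg.norm(grad_x)
    if norm == 0.0:
        return np.zeros_like(grad_x)
    return epsilon * grad_x / norm

def loss(w, x, y):
    # Toy squared-error loss for a linear model (illustrative only).
    return 0.5 * (w @ x - y) ** 2

def grad_loss_wrt_x(w, x, y):
    # Analytic gradient of the toy loss with respect to the input x.
    return (w @ x - y) * w

def adversarial_training_loss(w, x, y, epsilon=0.1, alpha=1.0):
    # Adversarial training objective (hypothetical weighting alpha):
    # clean loss plus the loss on the FGM-L2-perturbed input.
    g = grad_loss_wrt_x(w, x, y)
    x_adv = x + fgm_l2_perturbation(g, epsilon)
    return loss(w, x, y) + alpha * loss(w, x_adv, y)
```

Because the perturbation follows the input gradient, the adversarial term is (to first order) at least as large as the clean loss, so minimizing the combined objective encourages robustness inside the epsilon L2 ball.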
My mistake, I overlooked the adversarial training code in the modified train.py. Thank you for your prompt reply! @Daniel-Liu-c0deb0t
Hey, if you extend the code or run other experiments, keep me in the loop. I'm interested in what you do.
Sure, it’s my pleasure.
Hi @Daniel-Liu-c0deb0t, thank you for your great work! It seems that this repo doesn't implement the adversarial training described in the paper. Am I misunderstanding something?