davide97l / rl-policies-attacks-defenses

Adversarial attacks on Deep Reinforcement Learning (RL)
MIT License

puzzle about adversarial attacks #27

Open GongYanfu opened 8 months ago

GongYanfu commented 8 months ago

I found that there are many adversarial attacks implemented in your code, but in your paper you only used FGSM and PGD. Did you run experiments with the other attacks? If so, why aren't they discussed in the paper?
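For context, an FGSM-style attack on an RL policy typically perturbs the agent's observation by one signed-gradient step so the policy moves away from its original action. A minimal sketch is below, assuming a PyTorch policy module that maps a batch of observations to action logits; the function name and arguments are illustrative and not this repository's actual API.

```python
import torch
import torch.nn.functional as F

def fgsm_observation_attack(policy, obs, epsilon=0.01):
    """One FGSM step on a batch of observations.

    `policy` is assumed to be a torch.nn.Module mapping observations to
    action logits; all names here are illustrative, not the repo's API.
    """
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    # Use the policy's own greedy action as the label to push away from.
    greedy_action = logits.argmax(dim=-1)
    loss = F.cross_entropy(logits, greedy_action)
    loss.backward()
    # Single signed-gradient step of size epsilon on the observation.
    return (obs + epsilon * obs.grad.sign()).detach()
```

PGD follows the same idea but repeats this step several times, projecting the perturbed observation back into an epsilon-ball around the original after each step.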