Open andraghetti opened 4 years ago
Hi, thanks for pointing out the issues. I'm sorry, I don't have the version info, but your graph looks good. The experiment has high variance, so we ran it with many seeds and took the best (highest error) for each case. Your graph looks pretty good with a maximum attack EPE of about 35, but that is still quite low compared to our runs.
For reporting, we used GT = zero flow inside the patch, since the patch doesn't move. (cc @JJanai)
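The zero-flow ground truth inside the patch can be expressed as a masked EPE computation, roughly as follows. This is a minimal NumPy sketch; the array shapes, function name, and mask convention are assumptions, not the repo's actual code:

```python
import numpy as np

def epe_with_zero_gt_in_patch(pred_flow, gt_flow, patch_mask):
    """Average end-point error, using GT = zero flow inside the patch.

    pred_flow, gt_flow: (H, W, 2) arrays of (u, v) flow vectors.
    patch_mask: (H, W) boolean array, True where the patch covers the image.
    Since the patch itself does not move, its ground-truth flow is zero.
    """
    gt = gt_flow.copy()
    gt[patch_mask] = 0.0  # zero flow inside the patch region
    return float(np.linalg.norm(pred_flow - gt, axis=-1).mean())

# Tiny example: constant predicted flow (3, 4) -> per-pixel error 5 inside the patch.
pred = np.zeros((4, 4, 2)); pred[..., 0] = 3.0; pred[..., 1] = 4.0
gt = pred.copy()                                      # perfect prediction outside the patch
mask = np.zeros((4, 4), dtype=bool); mask[:2, :2] = True
print(epe_with_zero_gt_in_patch(pred, gt, mask))      # 4 of 16 pixels err by 5 -> 1.25
```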
Hi @anuragranj, regarding the graph, I don't quite understand how it could look good. I would expect the evaluation adv_epe to grow over the epochs (up to ~80), while in my experiments it fluctuates around the same value (~26), so my conclusion is that the patch optimization fails. Am I missing something? Could the problem be related to the PyTorch version? Is it normal for certain seeds to fail to converge?
The relative error increase from 14 to 35 is about 250%. The adv_epe does not grow after approximately 30 epochs, so we ran with many seeds and terminated each experiment at epoch 30. The patch optimization is not very robust, probably because it's a small patch over a whole image, so there is very little information for the gradient descent. The learning rate is therefore kept high, around 1e3 to 1e4, to obtain an adversarial patch. The high learning rate is another reason the optimization is not robust, but in my experience it doesn't work with lower learning rates. It is fairly normal for most seeds to fail; the usual success rate is about 1 in 5. But 30 epochs take about 2-3 hours, so it is fairly fast to run experiments with multiple seeds.
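The multi-seed procedure described above (run each seed for ~30 epochs, keep the run with the highest adversarial EPE) can be sketched as follows. `train_patch` is a hypothetical stand-in for the actual attack loop, not a function from the repo:

```python
import numpy as np

def train_patch(seed, epochs=30):
    """Hypothetical stand-in for one patch-optimization run.

    A real run would seed NumPy/PyTorch, optimize the patch for `epochs`
    epochs at a high learning rate (~1e3-1e4), and return the adversarial
    EPE reached. Here we fake it with a seeded random draw.
    """
    rng = np.random.default_rng(seed)
    return float(rng.uniform(10.0, 40.0))  # placeholder adv_epe

def best_over_seeds(seeds):
    # Most seeds fail (~1 in 5 succeed), so run several and keep the
    # run with the highest adversarial EPE.
    results = {s: train_patch(s) for s in seeds}
    best_seed = max(results, key=results.get)
    return best_seed, results[best_seed]

seed, adv_epe = best_over_seeds(range(5))
print(seed, adv_epe)
```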
Thanks for the details! What worried me about the +250% EPE is that we get it right after the first epoch, so my initial intuition attributed the huge error growth to the low robustness of FlowNet in the presence of an "unseen" object in the image rather than to the adversarial patch itself. I wonder what the effect of a totally random patch would be, i.e. disentangling the damage caused by the adversarial training from the accuracy drop due to normal domain shift. Have you ever trained or tested FlowNet with random patches to assess this? Thanks for the clarification regarding the learning rate; with it that high, I understand the high failure rate. Have you tried playing with the patch initialization to improve convergence? (I don't know whether common DL weight-initialization procedures can be applied to adversarial patches, for example.) Last question: have you tried other adversarial patch generation procedures? I'm thinking of a GAN-based approach, for example.
The first question is answered in the Analysis section of the paper, where we do see some difference with random patches, but we did not train FlowNet with them. I haven't tried playing with patch initialisation or GAN-based patch generation; I think it would be a great idea to try that. Could you also send me a pull request, if possible, with the fixes for the issues you resolved? Thanks a lot.
Hi, we created a sweep changing only the seeds (NumPy and PyTorch), but we never managed to get more than 45 adv_epe (reached at an intermediate step). It seems the patch saturates within just one epoch.
Let us know if you are available for a chat about it.
Hi,
Can you please provide the requirements with version numbers? I tried to run main.py for training, but the script is broken; I had to change a few things to make it work. Yet, even after the changes, I couldn't make the patch affect the FlowNetC predictions. Could it be a version problem? It seems the patch is not 'learning'.
Requirements
Changes to make it work
`center=True`. I hope that was the intended behavior.

Results
On FlowNetC, patch 153x153 (patch_size=0.4)
Some considerations
FlowNetC looks bothered by the patch, but the results are not as heavily affected as stated in the paper. Also, it seems the evaluation also considers the flow inside the patch, which is not fair. A fair comparison should be 'learned patch' vs 'random patch', not 'learned patch' vs 'original flow'.
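The masked evaluation suggested here (ignore the flow inside the patch, then compare a learned patch against a random one on the same mask) could look like the sketch below. The function and variable names are illustrative assumptions, not code from the repo:

```python
import numpy as np

def epe_outside_patch(pred_flow, gt_flow, patch_mask):
    """Average end-point error over pixels NOT covered by the patch."""
    err = np.linalg.norm(pred_flow - gt_flow, axis=-1)
    return float(err[~patch_mask].mean())

# 'learned patch' vs 'random patch': evaluate both against the same GT and
# mask, so the domain shift from pasting *any* patch is factored out.
H, W = 8, 8
gt = np.zeros((H, W, 2))
mask = np.zeros((H, W), dtype=bool); mask[:3, :3] = True

pred_learned = gt.copy(); pred_learned[..., 0] = 2.0   # strong corruption outside the patch
pred_random = gt.copy(); pred_random[..., 0] = 0.5     # mild corruption outside the patch
print(epe_outside_patch(pred_learned, gt, mask))       # 2.0
print(epe_outside_patch(pred_random, gt, mask))        # 0.5
```

The gap between the two numbers, rather than either number alone, would measure what the adversarial optimization adds over a random patch.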
Thank you for your time and efforts! It's a very nice idea and I'm eager to see it working!