anuragranj / flowattack

Attacking Optical Flow (ICCV 2019)
https://flowattack.is.tue.mpg.de

Requirements and training #3

Open andraghetti opened 4 years ago

andraghetti commented 4 years ago

Hi,

Can you please provide the requirements with versions? I tried to run main.py for training, but the script is broken. I had to change a few things to make it work, yet even after the changes I couldn't make the patch affect the FlowNetC predictions. Could it be a version problem? It seems like the patch is not 'learning'.

Requirements

cudatoolkit               10.1.243             h6bb024c_0
libpng                    1.6.37               hbc83047_0
matplotlib                3.2.2                    pypi_0    pypi
numpy                     1.19.0                   pypi_0    pypi
numpy-base                1.18.5           py38hde5b4d6_0
nvidia-ml-py3             7.352.0                  pypi_0    pypi
opencv-python             4.2.0.34                 pypi_0    pypi
pillow                    7.2.0                    pypi_0    pypi
pip                       20.1.1                   py38_1
pypng                     0.0.20                   pypi_0    pypi
python                    3.8.3                hcff3b4d_2
python-utils              2.4.0                    pypi_0    pypi
pytorch                   1.5.1           py3.8_cuda10.1.243_cudnn7.6.3_0    pytorch
scikit-image              0.17.2                   pypi_0    pypi
scipy                     1.5.0                    pypi_0    pypi
six                       1.15.0                     py_0
spatial-correlation-sampler 0.2.1                    pypi_0    pypi
tensorboardx              2.0                      pypi_0    pypi
torch                     1.5.1                    pypi_0    pypi
torchvision               0.6.1                    pypi_0    pypi

Changes to make it work

  1. This made all the RGB images blurry, so I removed the 0.5. Can you explain why you did that?
  2. This is a return in the middle of a function definition. I moved it to the bottom.
  3. I had to change the True at the end of this line into center=True. I hope that was the intended behavior.

Results

On FlowNetC, patch 153x153 (patch_size=0.4)

[Screenshot: adversarial EPE over training epochs, 2020-08-09]

Some considerations

FlowNetC looks bothered by the patch, but the results are not as heavily affected as stated in the paper. Also, it seems that the evaluation also considers the flow inside the patch, which is not fair. A fair comparison should be 'learned patch' vs. 'random patch', not 'learned patch' vs. 'original flow'.

Thank you for your time and efforts! It's a very nice idea and I'm eager to see it working!

anuragranj commented 4 years ago

Hi, thanks for pointing out the issues. I am sorry, I don't have the version info. The experiment has high variance, so we ran it with many seeds and took the best (highest error) for each case. Your graph looks pretty good, with a maximum attack EPE of 35 or so, but that is still very low compared to our runs.

  1. It was a hack to integrate the different flow networks: it rescales images from [-1, 1] to [0, 1]. I think PWC-Net has that constraint (see the sketch after this list).
  2. Oops. Thanks, it must have happened with some drag and drop.
  3. That always positions the patch in the centre. It should be set to False so that the patch becomes invariant to location.
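
For concreteness, a minimal sketch of both behaviours (illustrative names only, not the actual repo code):

```python
import torch

def rescale_for_network(imgs):
    # The "0.5 hack": maps images from [-1, 1] (common flow-network
    # preprocessing) to [0, 1], which e.g. PWC-Net expects as input.
    return imgs / 2.0 + 0.5

def place_patch(img, patch, center=False):
    # center=True always pins the patch to the image centre;
    # center=False samples a random location each time, so the learned
    # patch cannot overfit to a single position.
    _, _, H, W = img.shape
    _, h, w = patch.shape
    if center:
        y, x = (H - h) // 2, (W - w) // 2
    else:
        y = torch.randint(0, H - h + 1, (1,)).item()
        x = torch.randint(0, W - w + 1, (1,)).item()
    out = img.clone()
    out[:, :, y:y + h, x:x + w] = patch
    return out
```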

For reporting, we used GT = zero flow inside the patch, since the patch doesn't move. (cc @JJanai)
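
In code, that convention is roughly the following (a sketch; the mask and variable names are mine, not the actual repo code):

```python
import torch

def adv_epe(flow_pred, flow_gt, patch_mask):
    # patch_mask: (B, 1, H, W), 1 inside the patch, 0 elsewhere.
    # The patch itself is static, so the reference flow is zeroed
    # under the patch before computing the end-point error.
    flow_gt = flow_gt * (1 - patch_mask)
    return torch.norm(flow_pred - flow_gt, p=2, dim=1).mean()
```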

pierluigidovesi commented 4 years ago

Hi @anuragranj, regarding the graph, I don't quite understand how it could look good. I would expect the evaluation adv_epe to grow over the epochs (up to ~80), while in our experiments it fluctuates around the same value (~26), so my conclusion is that the patch optimization fails. Am I missing something? Could the problem be related to the PyTorch version? Is it normal for certain seeds to fail to converge?

anuragranj commented 4 years ago

The relative error from 14 to 35 is about 250%. The adv_epe does not grow after approximately 30 epochs, so we ran with many seeds and terminated each experiment at epoch 30. The patch optimization is not very robust, probably because it is a small patch over a whole image, so there is very little information for gradient descent. The learning rate is therefore kept high, around 1e3 to 1e4, to get an adversarial patch. The high learning rate is another reason why the optimization is not robust, but it doesn't work with lower learning rates in my experience. It is fairly normal for most seeds to fail; the usual success rate is roughly 1 in 5. But 30 epochs take about 2-3 hours, so it is fairly fast to run experiments with multiple seeds.
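
As a rough sketch of that protocol (the attack run itself is a placeholder, not the repo's API):

```python
import numpy as np
import torch

def run_attack(seed, epochs=30, lr=1e3):
    # Placeholder for one attack run: seed both RNGs, then optimize
    # the patch with a high learning rate (1e3-1e4) for ~30 epochs
    # and return the final adversarial EPE.
    np.random.seed(seed)
    torch.manual_seed(seed)
    ...  # patch optimization goes here
    return 0.0  # adv EPE of this run

# Most seeds fail (roughly 1 in 5 succeeds), so run many and report
# the strongest attack, i.e. the highest adversarial EPE.
results = {seed: run_attack(seed) for seed in range(10)}
best_seed = max(results, key=results.get)
```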

pierluigidovesi commented 4 years ago

Thanks for the details! What worried me about the +250% EPE is that we get it right after the first epoch, so my initial intuition attributed the huge error growth to FlowNet's low robustness to an "unseen" object in the image rather than to the adversarial patch. I wonder what the effect of a totally random patch would be, i.e. disentangling the damage caused by the adversarial training from the accuracy drop due to normal domain shift. Have you ever trained or tested FlowNet with random patches to assess this? Thanks for the clarification regarding the learning rate; if it's that high, I understand the high failure rate. Have you tried to play with the patch initialization process to improve convergence? (I don't know whether common DL weight initialization procedures can be applied to adversarial patches, for example.) Last question: have you tried other adversarial patch generation procedures? I'm thinking of a "GAN" approach, for example.
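
Concretely, the comparison I have in mind is something like this (a rough sketch; the model(img1, img2) call and all names are just illustrative):

```python
import torch

def mean_epe(a, b):
    # Mean end-point error between two flow fields (B, 2, H, W).
    return torch.norm(a - b, p=2, dim=1).mean()

@torch.no_grad()
def patch_effect(model, img1, img2, learned_patch):
    # Paste a patch at the same spot in both frames (it is static, so
    # its true flow is zero) and compare a learned patch against a
    # random one of the same size. The random-patch EPE measures the
    # plain domain-shift cost of an unseen object; the gap between the
    # two numbers isolates the adversarial contribution.
    _, _, H, W = img1.shape
    _, h, w = learned_patch.shape
    y = torch.randint(0, H - h + 1, (1,)).item()
    x = torch.randint(0, W - w + 1, (1,)).item()

    def paste(img, p):
        out = img.clone()
        out[:, :, y:y + h, x:x + w] = p
        return out

    clean = model(img1, img2)
    random_patch = torch.rand_like(learned_patch)
    learned = model(paste(img1, learned_patch), paste(img2, learned_patch))
    random_ = model(paste(img1, random_patch), paste(img2, random_patch))
    return mean_epe(learned, clean), mean_epe(random_, clean)
```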

anuragranj commented 4 years ago

The first question is answered in the Analysis section of the paper, where we see some difference with random patches, but we did not train FlowNet with them. I haven't tried playing with patch initialisation or GAN-based patch generation; I think it would be a great idea to try that. Could you also send me a pull request, if possible, with the fixes for the issues you resolved? Thanks a lot.

andraghetti commented 4 years ago

Hi, we created a sweep changing only the seeds (NumPy and PyTorch), but we never managed to get more than 45 adv_epe (reached at an intermediate step). It seems like the patch saturates in just one epoch.

Let us know if you are available for having a chat about it.