Closed ch-ho00 closed 1 year ago
Thank you for your suggestion! I think what you proposed makes sense: it would add another source of perturbations during adversarial training, namely the projection step. This could further degrade the quality of rendered images, since the color blending steps also require the projected RGB. We didn't add this because color blending is not used by all generalizable NeRFs, and we wanted to study the vulnerability caused by adversarial feature maps only. You can definitely incorporate the modification you proposed to derive a holistic attack.
Well understood! Thanks once again for the great work
First of all, thank you for open sourcing the interesting work! While I was trying out some experiments, I had a question on line https://github.com/GATECH-EIC/NeRFool/blob/main/train.py#L158.
I think this line should be replaced with these two lines:
The reason is that the rendering is image-based (at least for IBRNet), where the radiance value queried at each xyz sample is a combination of pixel values projected from the source images. Since the source images are perturbed, the rendering should be based on the perturbed images themselves. Let me know if this makes sense and what you think?
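To illustrate the point above, here is a minimal numpy sketch of the RGB-gathering step in image-based rendering. All names (`gather_projected_rgb`, the nearest-pixel lookup, the toy shapes) are hypothetical simplifications, not IBRNet's actual implementation, which projects 3D samples into each source view and bilinearly interpolates RGB there:

```python
import numpy as np

def gather_projected_rgb(src_imgs, pix_coords):
    """Gather per-view RGB at projected pixel coordinates (hypothetical helper).

    src_imgs:   (V, H, W, 3) source images
    pix_coords: (V, N, 2) integer (x, y) projections of N 3D samples
    returns:    (V, N, 3) RGB gathered from each source view
    """
    V, H, W, _ = src_imgs.shape
    out = np.empty((V, pix_coords.shape[1], 3), dtype=src_imgs.dtype)
    for v in range(V):
        # Nearest-pixel lookup for simplicity; the real pipeline interpolates.
        x = np.clip(pix_coords[v, :, 0], 0, W - 1)
        y = np.clip(pix_coords[v, :, 1], 0, H - 1)
        out[v] = src_imgs[v, y, x]
    return out

# If the renderer gathers RGB from the perturbed source images rather than
# the clean ones, the adversarial perturbation propagates through the
# projection step into the blended color, which is the change proposed above.
rng = np.random.default_rng(0)
clean = rng.random((2, 8, 8, 3)).astype(np.float32)       # 2 toy source views
delta = 0.03 * rng.standard_normal(clean.shape).astype(np.float32)
perturbed = np.clip(clean + delta, 0.0, 1.0)

coords = rng.integers(0, 8, size=(2, 5, 2))               # 5 projected samples
rgb_clean = gather_projected_rgb(clean, coords)
rgb_pert = gather_projected_rgb(perturbed, coords)
```

The gathered colors differ between the two calls, which is exactly why sampling from the perturbed images (rather than the clean ones) matters for the attack.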