autonomousvision / giraffe

This repository contains the code for the CVPR 2021 paper "GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields"
https://m-niemeyer.github.io/project-pages/giraffe/index.html
MIT License

About the model structure #20

Closed diaodeyi closed 3 years ago

diaodeyi commented 3 years ago

Hi, thanks for your great work. I have two questions about the model:

  1. Why did you abandon the patch inputs and patch discriminator used in GRAF?
  2. How does this model handle a dataset's data requirements, such as the scene bounds, etc.? (GRAF uses LLFF or COLMAP.)
m-niemeyer commented 3 years ago

Hi @diaodeyi , thanks for your interest in the project.

  1. In GRAF, we used a patch discriminator to avoid excessive memory requirements; applying an adversarial loss to the full image is likely to give better results (works like CAMPARI and pi-GAN also show this). In this work, we combine volume and neural rendering so that rendering the full-resolution image is no longer that expensive, and we can train with an adversarial loss on the full image (see the sketch after this list).
  2. I have to say I don't understand your question. GRAF also trains in an adversarial manner on large image collections - we did not apply GRAF to single scenes.
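
To illustrate point 1, here is a minimal sketch of the idea, assuming a PyTorch setup: volume rendering produces a cheap low-resolution *feature* map, and a small 2D CNN upsamples it to the full-resolution image on which the adversarial loss is applied. The module name, channel counts, and resolutions below are placeholders for illustration, not the repository's actual classes.

```python
import math
import torch
import torch.nn as nn

class NeuralRenderer2D(nn.Module):
    """Toy 2D neural renderer: upsamples a low-res feature map to a full-res RGB image."""
    def __init__(self, feat_dim=128, feat_res=16, out_res=64):
        super().__init__()
        n_up = int(math.log2(out_res // feat_res))  # number of 2x upsampling steps
        layers, c = [], feat_dim
        for _ in range(n_up):
            layers += [nn.Upsample(scale_factor=2, mode="nearest"),
                       nn.Conv2d(c, c // 2, 3, padding=1),
                       nn.LeakyReLU(0.2)]
            c //= 2
        layers += [nn.Conv2d(c, 3, 3, padding=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, feat):          # feat: (B, feat_dim, feat_res, feat_res)
        return self.net(feat)         # image: (B, 3, out_res, out_res)


if __name__ == "__main__":
    renderer = NeuralRenderer2D()
    # Stand-in for the output of volume rendering at low resolution:
    # rendering features at 16x16 keeps the number of rays (and memory) small.
    feat = torch.randn(2, 128, 16, 16)
    img = renderer(feat)              # full 64x64 image for the discriminator
    print(img.shape)                  # torch.Size([2, 3, 64, 64])
```

Because the discriminator sees the entire upsampled image, no patch sampling is needed during training.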
diaodeyi commented 3 years ago

Thanks for your reply. As for the second question, I mean: how do I get the config parameters for a different dataset?

diaodeyi commented 3 years ago

Specifically, how should I set the bounding_box_generator_kwargs and the generator_kwargs?
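
For context, these two kwarg groups are set in the repository's config files. A rough sketch of the kind of parameters they control follows; the key names and values here are assumptions for illustration, not the repository's actual defaults, so the real config files should be consulted.

```python
# Illustrative sketch only: numbers are placeholders, not actual defaults.
# bounding_box_generator_kwargs roughly controls the scene layout
# (per-object scale/translation ranges), which is dataset-dependent.
bounding_box_generator_kwargs = {
    "scale_range_min": [0.2, 0.2, 0.2],          # smallest per-object scale
    "scale_range_max": [0.25, 0.25, 0.25],       # largest per-object scale
    "translation_range_min": [-0.2, -0.2, 0.0],  # object translation bounds
    "translation_range_max": [0.2, 0.2, 0.0],
}

# generator_kwargs roughly controls the camera and rendering setup.
generator_kwargs = {
    "range_u": [0.0, 1.0],      # camera azimuth range (normalized)
    "range_v": [0.4, 0.5],      # camera elevation range (normalized)
    "fov": 10.0,                # field of view in degrees
    "depth_range": [0.5, 6.0],  # near/far bounds along each ray
    "n_ray_samples": 64,        # samples per ray for volume rendering
}
```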