Kunhao-Liu / StyleRF

[CVPR 2023] StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields
https://kunhao-liu.github.io/StyleRF/

Question about sampling method. #19

Closed ShiyeLi closed 7 months ago

ShiyeLi commented 1 year ago

Hi! Thanks for sharing such amazing work! After reading your code, I found that when training the feature encoder part, you use random sampling of rays, but when training the decoder, you sample the whole image. What is the reason for this difference? Does the sampling method have a significant impact on training the feature encoder? Looking forward to your reply!

ShiyeLi commented 1 year ago

Another question: what is the reason for alternating between the feature loss and the RGB loss + perceptual loss when training the network? Intuitively, there should be no problem training the feature encoder and the decoder separately, since the encoder's target is derived by upsampling the VGG encoder's features. Both questions are about the train_feature stage. I would appreciate it if you are willing to reply!

Kunhao-Liu commented 7 months ago

Hi, for your first question: because VGG's feature extraction requires a complete image patch, VGG cannot extract meaningful features from randomly sampled rays.
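To illustrate the point, here is a toy sketch (not StyleRF code) contrasting the two sampling strategies. A small average filter stands in for VGG's convolutions; the variable names and shapes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8
image = rng.random((H, W, 3))  # a small rendered RGB patch

# Strategy 1: random ray sampling (per-ray supervision is fine here).
# Each sampled ray is an independent pixel, so spatial neighborhoods are lost.
n_rays = 16
idx = rng.choice(H * W, size=n_rays, replace=False)
rays = image.reshape(-1, 3)[idx]  # shape (16, 3): no 2D structure to convolve over

# Strategy 2: whole-patch rendering. A convolutional extractor like VGG
# needs a contiguous 2D grid; e.g. even a 3x3 average filter is only
# meaningful on neighboring pixels:
kernel = np.ones((3, 3)) / 9.0
feat = np.zeros((H - 2, W - 2))
for i in range(H - 2):
    for j in range(W - 2):
        feat[i, j] = (image[i:i + 3, j:j + 3, 0] * kernel).sum()
```

This is why the decoder stage, which is supervised through VGG, renders full image patches, while the feature stage can use cheap random rays supervised per-pixel against the (upsampled) VGG feature map.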

For your second question: I implemented this just to save memory and computation. You can use them together in one iteration.
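The memory-saving schedule the answer describes can be sketched as a simple alternation, so that only one loss's computation graph is held per step. This is a hypothetical sketch, not the actual StyleRF training loop:

```python
def loss_for_iteration(it: int) -> str:
    """Pick one objective per iteration so only one backward graph
    lives in memory at a time (a sketch of the schedule described
    above; the real code may alternate differently)."""
    if it % 2 == 0:
        return "feature"          # match rendered features to upsampled VGG targets
    return "rgb+perceptual"       # supervise the decoder's RGB output

schedule = [loss_for_iteration(i) for i in range(4)]
# → ['feature', 'rgb+perceptual', 'feature', 'rgb+perceptual']
```

As the maintainer notes, nothing prevents summing both losses in a single iteration if memory allows; the alternation only trades wall-clock convergence for lower peak memory.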