Closed tsuJohc closed 1 year ago
Hi! We haven't tried 360 indoor scenes, but your case should work. Did you use manually-annotated masks? I suggest you try the following and see if anything helps:
- `lambda_trans_depth_smoothness` to force the mirror to have the same depth as the wall
- `lambda_beta_mask` to force the mirror to have beta=1
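To make the intent of these two weights concrete, here is a hedged, self-contained sketch of what such regularizers typically compute. This is not NeRFReN's actual implementation; the function names and the 1-D pixel layout are illustrative assumptions only (the real losses operate on rendered 2-D maps and are weighted by the lambdas above).

```python
# Illustrative sketch only -- NOT NeRFReN's real code.
# beta: per-pixel blend weight between transmitted and reflected branches.
# mask: 1 where the manually-annotated mirror is, 0 elsewhere.

def beta_mask_loss(beta, mask):
    """Push beta toward 1 inside the mirror mask (mean L1 penalty),
    which is roughly what a lambda_beta_mask-style term encourages."""
    terms = [abs(b - 1.0) for b, m in zip(beta, mask) if m == 1]
    return sum(terms) / max(len(terms), 1)

def depth_smoothness_loss(depth, mask):
    """Penalize transmitted-depth gradients between neighboring mirror
    pixels, nudging the mirror's depth toward the flat wall plane --
    the idea behind a lambda_trans_depth_smoothness-style term."""
    terms = [abs(d1 - d0)
             for d0, d1, m0, m1 in zip(depth, depth[1:], mask, mask[1:])
             if m0 == 1 and m1 == 1]
    return sum(terms) / max(len(terms), 1)

# Toy 1-D "scanline": three mirror pixels followed by one wall pixel.
beta = [0.2, 0.9, 1.0, 0.1]
mask = [1, 1, 1, 0]
depth = [2.0, 2.5, 2.5, 5.0]
print(beta_mask_loss(beta, mask))        # mean |beta - 1| over mirror pixels
print(depth_smoothness_loss(depth, mask))  # mean depth jump inside the mask
```

Raising the corresponding lambdas strengthens these penalties, so beta saturates to 1 on the mirror and the transmitted depth stays planar there.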
Thanks for the prompt reply! I used manually-annotated masks for each frame, but that did not help. I will try the other two suggestions.
Besides, in my 360-degree dataset, roughly 50% of the frames cannot see the mirror. Could this affect the learning of the reflected branch?
I think it's fine if some of the frames cannot see the mirror. You could verify this by training only on the frames that contain the mirror.
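The verification step above can be sketched as a small frame filter. This is a hypothetical helper, not part of the NeRFReN codebase; the mask representation (nested lists with 1 = mirror pixel) and the `min_fraction` threshold are assumptions made for illustration.

```python
# Hypothetical sketch: keep only frames whose mirror mask actually
# contains mirror pixels, so every training view supervises the
# reflected branch. Masks are mocked as 2-D lists (1 = mirror).

def mask_has_mirror(mask, min_fraction=0.01):
    """True if at least min_fraction of the pixels belong to the mirror."""
    total = sum(len(row) for row in mask)
    fg = sum(sum(row) for row in mask)
    return total > 0 and fg / total >= min_fraction

def filter_frames(frames, masks, min_fraction=0.01):
    """Keep (frame, mask) pairs in which the mirror is visible."""
    return [(f, m) for f, m in zip(frames, masks)
            if mask_has_mirror(m, min_fraction)]

# Toy example: frame 0 sees the mirror, frame 1 does not.
masks = [[[0, 1], [1, 1]], [[0, 0], [0, 0]]]
frames = ["frame_000.png", "frame_001.png"]
kept = filter_frames(frames, masks)
print([f for f, _ in kept])
```

If training on the filtered subset fixes the all-black reflected branch, the mirror-free frames were likely diluting its supervision.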
Hi, thanks for publishing this amazing work. I ran NeRFReN on my custom dataset, where the camera scans the room in a full 360-degree circle. There is a large mirror on the left wall.
I use the spheric-poses option (no NDC) and find that the reflected branch cannot learn the color of the mirror, even when mirror masks are provided. The hyperparameters are the same as in
train_mirror.sh
. At an early stage of training, the transmitted branch learns the whole scene (including the color of the mirror) very quickly, while the reflected branch outputs all-black colors. At the same time, the beta values take on a coarse shape of the mirror mask.