Hi,
I have a question about training the texture network for multi-view data. I have trained the multi-view shape network and it works. However, since the PTR process adds environment lighting to the rendered 2D training images, will training the texture network directly on those rendered images affect the final results? Each sample point will have a different RGB value depending on the viewing angle, and simply averaging those RGB values should not work, right? How should this problem be handled?
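To make the concern concrete, here is a minimal sketch (with hypothetical RGB values, not from the actual pipeline) showing how the color observed at a single surface point can vary across views once environment lighting is baked into the renders, and how a naive mean collapses that variation:

```python
import numpy as np

# Hypothetical RGB observations of ONE surface point from 4 camera views.
# Because the renders include environment lighting, the observed color
# varies with viewing angle (specular highlights, shading, etc.).
rgb_per_view = np.array([
    [0.80, 0.30, 0.25],  # view 0: bright highlight
    [0.55, 0.22, 0.20],  # view 1
    [0.50, 0.20, 0.18],  # view 2
    [0.95, 0.45, 0.40],  # view 3: strong specular lobe
])

mean_rgb = rgb_per_view.mean(axis=0)  # what naive averaging would supervise
std_rgb = rgb_per_view.std(axis=0)    # spread caused by view-dependent lighting

print("mean RGB:", mean_rgb)
print("std  RGB:", std_rgb)
```

A large per-channel standard deviation relative to the mean is exactly the situation described above: the mean is a blurred mixture of lighting conditions rather than the point's intrinsic albedo, which is why averaging alone seems insufficient for supervising the texture network.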
Thanks for your answer.