Closed Un1Lee closed 10 months ago
Hi, that's a nice question about the code! The code is based on the idea that normalizing the coefficients before training the network to predict them may be beneficial. While skipping normalization may also work, normalizing provides some assurance that the results will not be poor. To compute the mean and standard deviation used for normalization, the $ws$ are sampled 1000 times and the statistics are taken over those samples.
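A minimal sketch of the statistics computation described above. `sample_ws` is a stand-in for the generator's mapping network (in the real repo the samples would come from EG3D); the file names match the ones referenced later in this thread:

```python
import os
import numpy as np

rng = np.random.default_rng(0)

def sample_ws(n):
    # Placeholder for the mapping network z -> w: a fixed random
    # linear map, just to produce ws-shaped samples for this sketch.
    z = rng.standard_normal((n, 512))
    W = rng.standard_normal((512, 512)) * 0.05
    return z @ W

ws = sample_ws(1000)        # 1000 samples, as described above
ws_avg = ws.mean(axis=0)    # per-dimension mean
ws_std = ws.std(axis=0)     # per-dimension standard deviation

os.makedirs('pretrained', exist_ok=True)
np.save('pretrained/ws_avg.npy', ws_avg)
np.save('pretrained/ws_std.npy', ws_std)
```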
Thank you for your answer! However, I found that in decouple_by_invert.py line 62, the code applies `ones_like`:

self.ws_stdv = torch.ones_like(torch.from_numpy(np.load('pretrained/ws_std.npy')))

I think ws_stdv is not trainable, and when testing your model it should load ws_std.npy instead of wrapping it in ones_like. Maybe it's a bug? When I delete the ones_like, I think I get better results with your model.
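A self-contained sketch of the fix suggested above. A stand-in ws_std.npy is fabricated here so the snippet runs on its own; in the repo the real file ships under pretrained/:

```python
import numpy as np
import torch

# Stand-in for pretrained/ws_std.npy (illustrative values only).
np.save('ws_std_demo.npy', np.full(512, 0.7, dtype=np.float32))

# Buggy: the saved statistics are discarded and replaced with ones.
ws_stdv_buggy = torch.ones_like(torch.from_numpy(np.load('ws_std_demo.npy')))

# Fixed: load the saved statistics directly.
ws_stdv = torch.from_numpy(np.load('ws_std_demo.npy'))
```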
@Un1Lee @theEricMa Same here. Why torch.ones_like?
Thanks for pointing out the bug! I'll work on providing a fix.
Thank you for your work! I'm confused by this line:

ws_trans = ws_trans * self.ws_stdv.to(w_opt)

Why doesn't ws_trans come directly from the network instead of being multiplied by ws_stdv? I understand what ws_stdv means, but how is it obtained? Does it come from the official EG3D release?
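If the network is trained on normalized targets, its raw output has to be scaled back by the same statistics before use, which would explain the multiplication asked about above. A minimal round-trip sketch of that convention (names and values are illustrative, not the repo's exact code):

```python
import torch

# Stand-ins for the statistics in pretrained/ws_avg.npy / ws_std.npy.
ws_avg = torch.zeros(512)
ws_std = torch.full((512,), 2.0)

ws = torch.randn(512) * 2.0          # a raw latent in w-space

# Training targets are normalized...
ws_norm = (ws - ws_avg) / ws_std

# ...so a prediction in normalized space must be denormalized,
# mirroring: ws_trans = ws_trans * self.ws_stdv
ws_trans = ws_norm * ws_std + ws_avg
```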