frankkim1108 opened 2 months ago
Hello, @liuff19
Can you give more detail about the process of getting $\sigma_i$ and $C_i$?
As in the paper, equation (13):
$$ L_{I_i} = - \log \left( \frac{1}{\sqrt{2 \pi \sigma_i^2}} \exp \left( -\frac{|\hat{C}_i - C_i|^2}{2 \sigma_i^2} \right) \right) $$
Does equation (13) use only one sample $(\hat{C}_i, C_i)$ for training for view $i$? With only one sample, is it even possible to optimize equation (13)?
Or does it use a set of per-pixel samples for view $i$, so that we can use that set of data to train equation (13) and obtain $\sigma_i$ and $C_i$?
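To make my question concrete: if equation (13) is summed over a *set* of per-pixel samples, the optimal $\sigma_i$ has a closed form ($\sigma_i^2 = \text{mean}\,|\hat{C}_i - C_i|^2$). Here is a minimal NumPy sketch of that reading; all names and the random data are mine, not from your code:

```python
import numpy as np

def sigma_nll(c_hat, c_gt, sigma):
    """Negative log-likelihood of equation (13), summed over per-pixel samples."""
    r2 = (c_hat - c_gt) ** 2
    return np.sum(0.5 * np.log(2 * np.pi * sigma**2) + r2 / (2 * sigma**2))

# With many residuals, setting the derivative w.r.t. sigma to zero gives
# sigma^2 = mean(|c_hat - c_gt|^2), i.e. the NLL is minimized in closed form.
rng = np.random.default_rng(0)
c_gt = rng.random(1000)                           # per-pixel ground-truth colors
c_hat = c_gt + rng.normal(0.0, 0.1, size=1000)    # per-pixel generated colors
sigma_star = np.sqrt(np.mean((c_hat - c_gt) ** 2))
```

With a single sample the same derivation would give $\sigma_i = |\hat{C}_i - C_i|$, which is why I am unsure whether one sample is enough.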
Hello, @liuff19
Recently, I found your project very interesting and started reading your paper. However, I have some questions on some equations in the paper.
1. For equation (12)
$$ L_{\text{diffusion}} = \mathbb{E}_{x \sim p,\, \epsilon \sim \mathcal{N}(0, I),\, c_{\text{view}},\, c_{\text{struc}},\, t} \left[ \|\epsilon - \epsilon_{\theta} (x_t, t, c_{\text{view}}, c_{\text{struc}})\|^2_2 \right] $$
the paper says that $x_t$ is the noised latent from the ground-truth views of the training data. Which ground-truth views are you referring to?
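A toy NumPy sketch of how I currently understand $x_t$ in equation (12): the latent of a ground-truth view is noised at step $t$, and the network predicts the added noise. `alpha_bar` and `eps_theta` are illustrative names of mine, not from your code:

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = rng.normal(size=(4, 64))       # latents of ground-truth training views
eps = rng.normal(size=x0.shape)     # eps ~ N(0, I)
alpha_bar = 0.7                     # cumulative noise-schedule value at step t
# forward noising: x_t built from the ground-truth latent x0
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def eps_theta(x_t):
    """Stand-in for the conditioned denoiser eps_theta(x_t, t, c_view, c_struc)."""
    return np.zeros_like(x_t)

# Monte-Carlo estimate of the squared-error objective in equation (12)
loss = np.mean(np.sum((eps - eps_theta(x_t)) ** 2, axis=-1))
```

My uncertainty is only about *which* views supply `x0`, not about the noising itself.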
2. In section 5.4 it says that 'For the generated frames $\{I_i\}_{i=1}^{K'}$, we denote $\hat{C}_i$ and $C_i$ the per-pixel color value for generated and ground-truth view $i$.'
What do you mean by ground-truth view $C_i$?
It also appears in equation (13):
$$ L_{I_i} = - \log \left( \frac{1}{\sqrt{2 \pi \sigma_i^2}} \exp \left( -\frac{|\hat{C}_i - C_i|^2}{2 \sigma_i^2} \right) \right) $$
3. For equation (14)
$$ L_{\text{conf}} = \sum_{i=1}^{K'} C_i \left( \lambda_{\text{rgb}} L_1(\hat{I}_i, I_i) + \lambda_{\text{ssim}} L_{\text{ssim}}(\hat{I}_i, I_i) + \lambda_{\text{lpips}} L_{\text{lpips}}(\hat{I}_i, I_i) \right) $$
it seems the loss is calculated between the 32 generated frames $\hat{I}_i$ and their GT frames $I_i$. Which GT frames are you comparing against? Are they the input sparse views, or frames from the training dataset video?
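To make this question concrete, here is a toy NumPy sketch of how I read equation (14): each per-view loss is weighted by the confidence $C_i$. The weight values are arbitrary, and the SSIM/LPIPS terms are simple placeholders of mine (a real implementation would call SSIM/LPIPS libraries):

```python
import numpy as np

def conf_loss(I_hat, I, C, l_rgb=1.0, l_ssim=0.2, l_lpips=0.5):
    """Confidence-weighted sum over K' views, as in equation (14)."""
    total = 0.0
    for i_hat, i_gt, c in zip(I_hat, I, C):
        l1 = np.mean(np.abs(i_hat - i_gt))          # L_1 term
        ssim_term = np.mean((i_hat - i_gt) ** 2)    # placeholder for L_ssim
        lpips_term = np.mean((i_hat - i_gt) ** 2)   # placeholder for L_lpips
        total += c * (l_rgb * l1 + l_ssim * ssim_term + l_lpips * lpips_term)
    return total

rng = np.random.default_rng(2)
I = [rng.random((8, 8, 3)) for _ in range(3)]            # K' = 3 GT frames
I_hat = [f + 0.05 * rng.normal(size=f.shape) for f in I] # generated frames
C = [1.0, 0.5, 0.9]                                      # per-view confidences
loss = conf_loss(I_hat, I, C)
```

What I cannot tell from the paper is where the list `I` comes from, which is exactly my question above.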
Thank you in advance for your time to reply to this issue.
Best regards,
Frank