jkulhanek / wild-gaussians

[NeurIPS'24] WildGaussians: 3D Gaussian Splatting In the Wild
https://wild-gaussians.github.io

A question #10

Closed Dumeowmeow closed 2 months ago

Dumeowmeow commented 3 months ago

Thank you for your great work! I am a bit confused about where the code corresponding to the "0-th order SH" in the paper (Section 3.2) is. Doesn't gaussians["features"] represent all of the spherical harmonic coefficients?

jkulhanek commented 3 months ago

Hi, thanks for taking interest! The corresponding line of code is here: https://github.com/jkulhanek/wild-gaussians/blob/47c24e823c00ec22d4b7383cc31d90de7eaae1f8/wildgaussians/method.py#L865 where the first three dims (corresponding to the 0-th order SH) are taken from the features. Does that answer the question?
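
For intuition, here is a minimal sketch (not the repository's code; the tensor layout and SH degree are assumptions) of how the 0-th order SH term relates to the full feature tensor:

```python
import torch

num_gaussians = 1000
sh_degree = 3                      # assumed maximum SH degree
num_coeffs = (sh_degree + 1) ** 2  # 16 SH coefficients per color channel

# gaussians["features"] holds ALL SH coefficients for every Gaussian;
# assumed layout: [N, 3 * num_coeffs] with the DC (0-th order) term first.
features = torch.randn(num_gaussians, 3 * num_coeffs)

# The 0-th order SH is just the first coefficient per RGB channel,
# i.e. the first three dims of the feature vector.
colors_dc = features[..., :3]      # what the linked line slices out
higher_order = features[..., 3:]   # view-dependent terms
```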

Dumeowmeow commented 2 months ago

Thank you for your reply! It solves my problem. Another question: if I want to apply the lighting conditions of one image to another, is it possible to do so using this method?

jkulhanek commented 2 months ago

Sure! You get the appearance embedding either by 1) using method.get_train_embedding(...), which will give you the embedding of a training image, or 2) using method.optimize_embedding(...), which will optimize the appearance on the image you pass to it and return the appearance embedding.

Given you have an appearance embedding (as a numpy array), you can call method.render(camera, options={'embedding': embedding}) and it will render the scene as if it had the appearance encoded by that embedding (obtained by either 1 or 2).
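
To make the two options concrete, here is a hedged sketch; only render(camera, options={'embedding': ...}), get_train_embedding(...), and optimize_embedding(...) are named in this thread, so the exact argument lists are assumptions:

```python
# `method` is assumed to be an already-trained WildGaussians model,
# `camera` a camera for the view you want to render, and `image` an
# HxWx3 numpy array; none of these are constructed here.

# Option 1 (assumed to take a training-image index): reuse a learned embedding.
embedding = method.get_train_embedding(0)

# Option 2 (assumed to take the target image): fit a fresh embedding to it.
embedding = method.optimize_embedding(image)

# Render under that appearance (this call is quoted in the comment above).
outputs = method.render(camera, options={"embedding": embedding})
```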

Dumeowmeow commented 2 months ago

Thank you for your quick reply! I will try it. Thank you very much!

Dumeowmeow commented 2 months ago

I'm sorry, perhaps my situation is different from what you described. I have two images of different scenes, and I reconstruct the Gaussians for them separately. That is, after training I get two MLPs, the appearance embeddings for each image, and the per-Gaussian embeddings for two sets of Gaussians. Now, if I want to apply the lighting conditions of one image to the other, how should I do it? Do I need to go through the training process again?

jkulhanek commented 2 months ago

Are they completely separate scenes or are they registered in the same coordinate frame?

Dumeowmeow commented 2 months ago

In fact, their backgrounds are the same, but one foreground is real and the other foreground is manually pasted in, so it has no real lighting. I am trying to use the method from the paper to make the composited images look more natural. They are in the same coordinate system.

jkulhanek commented 2 months ago

Good, in that case you can use the optimize_embedding method to get the embedding vector.

Dumeowmeow commented 2 months ago

Thank you for your patient reply~ I looked at the code and found that when optimize_embedding is run, the Gaussians of the original scene are used. How can those Gaussians then learn the appearance of an image they have never seen, given that the two scenes have different Gaussians?

jkulhanek commented 2 months ago

You would call optimize_embedding on the model of the scene you want to use it for. In this case, you would call it on the scene TO which you want to transfer the appearance, but pass it the source image belonging to the source scene. Btw, why did you even optimize on the source scene? You should be fine with just one set of Gaussians, right? Or am I missing something?

Dumeowmeow commented 2 months ago

Perhaps my understanding still has some issues. Are you saying that if I already have a trained church scene and I want to apply the lighting from a car image to this scene, I only need to freeze the Gaussian embeddings and the MLP of the church scene, train only the appearance embedding, and compute the loss against the ground-truth car image, right?

jkulhanek commented 2 months ago

If both sets of images are in the same coordinate system, you can train the church scene, take an image of the car, optimize its embedding, and apply it to images of the church. Does this answer your question?
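
Putting the whole exchange together, a hypothetical sketch of that transfer (variable names like car_image and church_camera are illustrative, and optimize_embedding's argument list is an assumption):

```python
# 1) `method` is a WildGaussians model already trained on the church scene;
#    the car images are registered in the same coordinate frame.

# 2) Fit an appearance embedding to the car image. No retraining of the
#    Gaussians or the MLP is needed; only the embedding is optimized.
car_embedding = method.optimize_embedding(car_image)

# 3) Render any church view as if lit like the car image.
outputs = method.render(church_camera, options={"embedding": car_embedding})
```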

Dumeowmeow commented 2 months ago

Yeah, thank you for your patient reply!