Closed: TLB-MISS closed this issue 3 months ago
Additional Question
I tried the command below (from the README) with the default settings:
```
python train.py --config configs/flower.txt
```
But the results are quite bad.
The four images below are the outputs of the last iteration; they show no improvement at all. Are the provided configuration settings different from the configuration used in the paper's experiments?
Thanks for reaching out. Your results look reasonable from my side and are basically the same as those from my trial. To improve them, you may need better initial labels. One way of getting these is to run IEM with customized parameters.
I will finish the interface ASAP, hopefully after my current projects. One way to bypass this is to run COLMAP yourself on the CO3D scenes and use the provided interfaces.
Thanks for the quick reply!
As far as I know, CO3D converts the COLMAP results into quaternions and provides them. However, camera parameters aside, I am curious about how to set magic numbers such as `self.near_far` and `self.scene_bbox`.
> Thanks for reaching out. Your results look reasonable from my side and are basically the same as those from my trial. To improve them, you may need better initial labels. One way of getting these is to run IEM with customized parameters.
And aren't the masks you provide here the initial ones from IEM?
For CO3D scenes, I remember we used scene_bbox `[[-10, -10, -10], [10, 10, 10]]`. Not 100% sure, but you can adjust it based on your observations. From our experiments, it is not easy to achieve sound novel view synthesis on the CO3D dataset, even with other models (instant-ngp and other NeRF implementations).
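As an illustrative sketch of widening the box for a CO3D scene (the attribute layout below mirrors common TensoRF-style loaders and is an assumption, not this repo's exact API):

```python
import torch

# Hypothetical dataset-loader fragment: a wider scene bounding box for a
# CO3D scene, as suggested above. Names here are illustrative assumptions.
scene_bbox = torch.tensor([[-10., -10., -10.], [10., 10., 10.]])

# Quantities loaders often derive from the bbox:
center = scene_bbox.mean(dim=0)       # scene center (the origin here)
half_extent = scene_bbox[1] - center  # per-axis half size
print(center.tolist(), half_extent.tolist())  # prints [0.0, 0.0, 0.0] [10.0, 10.0, 10.0]
```

If the rendered scene looks clipped at the edges, growing the box (and re-checking `near_far`) is usually the first thing to try.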
I got better masks than what I provided, using more customized parameters. You can check the IEM paper and repo for more insights. If you are looking for perfect masks for flower and fortress, you can find them here.
Thank you for sharing!
@DarlingHang
> I will finish the interface ASAP, hopefully after my current projects. One way to bypass this is to run COLMAP yourself on the CO3D scenes and use the provided interfaces.
Sorry to rush you, but when will the CO3D interface be implemented? If it will take some time, could you check out my code (CO3D with COLMAP) instead? I used the given CO3D masks as initial masks, but the output segmentation is weird.
I first generated `poses_bounds.npy` from the COLMAP output by referring to the code here.
Next, I set the three variables below:

```python
self.near_far = [0.0, 1.0]
self.scene_bbox = torch.tensor([[-8., -8., -8.], [8., 8., 8.]])
scale_factor = near_original * 9
```
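For context, `poses_bounds.npy` follows the LLFF convention: each row stores a flattened 3x5 camera matrix (15 values) plus that image's (near, far) depth bounds, and `near_original` is typically the minimum near bound over all images. A minimal sketch, with a dummy array standing in for `np.load("poses_bounds.npy")` and the `* 9` factor from above treated as a tunable choice:

```python
import numpy as np

# Dummy stand-in for np.load("poses_bounds.npy"): 4 images, each row holding
# 15 pose values (a flattened 3x5 matrix) followed by (near, far) bounds.
poses_bounds = np.concatenate(
    [np.zeros((4, 15)), np.tile([2.0, 20.0], (4, 1))], axis=1
)  # shape (4, 17)

poses = poses_bounds[:, :15].reshape(-1, 3, 5)  # per-image camera matrices
bounds = poses_bounds[:, 15:]                   # per-image (near, far)

near_original = bounds.min()        # closest depth COLMAP observed
scale_factor = near_original * 9    # the factor used above; treat it as tunable
bounds = bounds / scale_factor      # depth bounds after rescaling
print(near_original, scale_factor)  # prints 2.0 18.0
```

If the rescaled depths end up far outside the `self.near_far = [0.0, 1.0]` range or the scene falls outside `scene_bbox`, those two settings need to move together with `scale_factor`.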
Output Example
I'm not sure if my implementation is wrong, or if my magic-number settings (`self.scene_bbox`, `self.near_far`, `scale_factor`, etc.) are wrong.
If needed, I can share my code and a mini dataset with you.
Thanks.
You can verify whether something is wrong with the bbox/near_far by checking the rendered RGB image. In most cases, COLMAP poses on CO3D are not good enough to give photo-realistic renderings.
Also, this object is highly textured, and I suspect our method wouldn't work well on it. Try something as easy as the LLFF flower scene.
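The rendered-RGB check above can be made quantitative with a quick PSNR comparison. This is a generic sanity-check sketch, not part of the repo; the helper and the dummy images are assumptions:

```python
import numpy as np

# Generic sanity check: compare a rendered RGB frame against its ground-truth
# training view. Low PSNR points at bad poses or a wrong bbox/near_far before
# the segmentation stage is even involved.
def psnr(rendered: np.ndarray, gt: np.ndarray) -> float:
    # Assumes both images are in [0, 1]; MAX_I is therefore 1.0.
    mse = np.mean((rendered.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)

# Dummy images in [0, 1]; swap in a real render and the matching frame.
gt = np.full((4, 4, 3), 0.5)
rendered = gt + 0.1
print(round(psnr(rendered, gt), 2))  # prints 20.0
```

If PSNR on held-in training views is already low (roughly under 20 dB), fixing the poses or the scene bounds matters more than tuning the masks.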
So if I want to get better masks, I have to provide better initial masks?
Hi. First of all, thank you for your wonderful work. When will the items in the TODO list be completed, especially "Interface for CO3D"?
Thank you in advance.