Closed. ZhenyanSun closed this issue 8 months ago.
Hi Zhenyan,
yes, we used COLMAP for the depth supervision. In our case, the obtained point clouds for each timestep were quite dense (except for the hair region), since we used 12 cameras for training. Even better 3D point reconstructions can be obtained with more advanced tools such as RealityCapture or Metashape. In the paper we ablate the effect of depth supervision and find that the method also works reasonably well without it on our dataset. However, if you want to run NeRSemble on a custom dataset with fewer views, depth supervision may be more important than in our 12-view setting.
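For anyone reproducing this: COLMAP's dense stage (`patch_match_stereo`) writes one depth map per view under `stereo/depth_maps/` as `.geometric.bin` files. A minimal reader for that format, sketched after COLMAP's own `scripts/python/read_dense.py` (the function name `read_colmap_array` is just chosen here), could look like:

```python
import numpy as np

def read_colmap_array(path):
    """Read a COLMAP dense map (e.g. stereo/depth_maps/*.geometric.bin).

    The file starts with an ASCII header "width&height&channels&",
    followed by little-endian float32 values (width varies fastest).
    """
    with open(path, "rb") as f:
        # Read bytes one at a time until the three '&' header delimiters
        # (after width, height, and channels) have been consumed.
        header = b""
        while header.count(b"&") < 3:
            header += f.read(1)
        width, height, channels = map(int, header.decode().split("&")[:3])
        # Remaining bytes are the raw float32 payload.
        data = np.fromfile(f, dtype=np.float32)
    array = data.reshape((width, height, channels), order="F")
    # Transpose to (height, width, channels); squeeze drops the channel
    # axis for single-channel depth maps.
    return np.transpose(array, (1, 0, 2)).squeeze()
```

Note that the returned depth values are in the scale of the COLMAP reconstruction, which is only metric if the camera poses were calibrated to a known scale (relevant to the mm question below).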
Thanks for your detailed explanation.
From: Tobias Kirschstein, sent Friday, December 8, 2023, 8:56:20 PM, subject: Re: [tobias-kirschstein/nersemble] how to get the depth map (Issue #7)
Hi, is the depth absolute or relative? That is, can I get the depth in mm from the dataset?
Thanks for your excellent work. How should I preprocess my videos if I want to train on my own dataset? Your paper mentions that the depth maps are computed with COLMAP. I want to confirm whether COLMAP is sufficient.