Closed: yunusstalha closed this issue 1 month ago.
Yes, very doable. Our rasterization() API supports rendering out only the depth map (API link):

> render_mode – The rendering mode. Supported modes are "RGB", "D", "ED", "RGB+D", and "RGB+ED". "RGB" renders the colored image, "D" renders the accumulated depth, and "ED" renders the expected depth. Default is "RGB".

That being said, it is very doable, but you would need to modify the training script accordingly.
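To make the difference between the "D" and "ED" modes concrete, here is a minimal numpy sketch of the underlying alpha-compositing math for a single pixel. This is an illustration of the standard front-to-back compositing formula, not gsplat's actual implementation, and the variable names are my own:

```python
import numpy as np

# Toy example: three Gaussians along one camera ray, sorted front to back,
# each with an alpha (opacity at this pixel) and a depth along the ray.
alphas = np.array([0.5, 0.5, 0.5])
depths = np.array([1.0, 2.0, 4.0])

# Front-to-back compositing weights: w_i = alpha_i * prod_{j<i} (1 - alpha_j)
transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
weights = alphas * transmittance

accumulated_depth = np.sum(weights * depths)             # the "D" mode
accumulated_alpha = np.sum(weights)                      # total pixel opacity
expected_depth = accumulated_depth / accumulated_alpha   # the "ED" mode

print(accumulated_depth, expected_depth)
```

Note that "D" is biased toward zero wherever the pixel is not fully opaque, while "ED" normalizes by the accumulated alpha, which is why "ED" is often the more natural target for a metric depth loss.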
Are there any examples? I want to try training with depth and RGB data.
One other question: how is depth handled or rasterized? For the depth loss, what is the GT depth data? Is it a normalized single-channel image or metric depth?
As for how depth is handled or rasterized: you can scroll down to the bottom of gsplat's README.md; there is an arXiv paper explaining the rasterization. For the depth loss, following the normalization pipeline in COLMAP, the GT depth is produced by unprojecting the normalized points with the pose. My dataset is RGB-D images, so my strategy is to rasterize the normalized points in camera coordinates. I think there is a quicker way mathematically, but I have not derived it yet.
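The unprojection step mentioned above can be sketched in a few lines of numpy: lift each pixel of a metric depth map to a camera-space point via the inverse intrinsics, then transform to world coordinates with the camera-to-world pose. The intrinsics and pose below are placeholder values for illustration, not from any real dataset:

```python
import numpy as np

# Placeholder camera: a tiny 2x2 depth map, made-up intrinsics and pose.
H, W = 2, 2
K = np.array([[100.0, 0.0, 0.5 * W],
              [0.0, 100.0, 0.5 * H],
              [0.0, 0.0, 1.0]])
depth = np.full((H, W), 2.0)      # metric depth map (e.g. meters)
c2w = np.eye(4)                   # camera-to-world pose
c2w[:3, 3] = [0.0, 0.0, 1.0]      # camera translated 1m along world z

# Pixel-center grid as homogeneous pixel coordinates [u, v, 1].
u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)

# Unproject: X_cam = depth * K^{-1} [u, v, 1]^T
rays = pix @ np.linalg.inv(K).T
pts_cam = rays * depth.reshape(-1, 1)

# To world coordinates: X_world = R X_cam + t
pts_world = pts_cam @ c2w[:3, :3].T + c2w[:3, 3]
print(pts_world)
```

If the points were normalized during preprocessing (as in a COLMAP-style normalization), the same scale and shift would have to be applied to the depth values before comparing them against the rasterized depth.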
Is it possible to train gsplat using your framework to generate only depth maps, without RGB information?