Closed — xiazhi1 closed this issue 10 months ago
Sure. The official code is actually a hack that stores the gradient during the backward pass. In this codebase, you can simply call means2D.retain_grad() and use the gradient of means2D as the metric for densification. I think it is mathematically equivalent.
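A minimal sketch of what this suggestion amounts to, with illustrative stand-in tensors rather than the repo's actual projection code: retain_grad() keeps the gradient of a non-leaf tensor so it can be read after backward() and fed into the densification heuristic.

```python
import torch

xyz = torch.randn(4, 3, requires_grad=True)   # stand-in for gaussians.xyz
means2D = xyz[:, :2] * 2.0                    # stand-in for the camera projection
means2D.retain_grad()                         # non-leaf: grad is discarded unless retained
loss = means2D.sum()                          # stand-in for the rendering loss
loss.backward()
# Per-Gaussian screen-space gradient norm, the quantity the 3DGS
# densification heuristic accumulates across iterations:
grad_norm = means2D.grad.norm(dim=-1)
```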
Thanks a lot. I have finished it.
Hi @xiazhi1, I was wondering if you could show your implementation of the adaptive control.
Hi @hbb1! Thanks for the awesome work!
Recently, I have been trying to add adaptive control on top of your torch rasterizer. I noticed that your code does not create the screenspace points the way the official implementation does. The official implementation first creates a screenspace_points tensor shaped like gaussians.xyz but filled with zeros, and calls retain_grad() on it to retain the gradients of the 2D (screen-space) means. It then defines means2D = screenspace_points, passes means2D and the other parameters into the rasterizer to render the image, and finally returns screenspace_points as viewpoint_tensor, whose grad is then used for adaptive control.

In your code, means2d is created at https://github.com/hbb1/torch-splatting/blob/e2d78419edca7847cc937376ed893e939f76a572/gaussian_splatting/gauss_render.py#L262. I want to know how I can get the correct gradients of the 2D (screen-space) means for adaptive control. My current idea is to return your code's means2d as viewpoint_tensor and retain its grad for adaptive control. Do you think this is correct? Is there any other way to implement adaptive control? Any suggestions are welcome! Looking forward to your reply!
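For reference, the official pattern described above can be sketched roughly as follows. This is a simplified illustration, not the real gaussian-splatting code: the actual version also sets dtype and device, and its CUDA rasterizer is what writes the gradient in backward; here a toy loss stands in for rendering.

```python
import torch

xyz = torch.randn(5, 3)                              # stand-in for gaussians.xyz
# Zero tensor with the same shape as the 3D means; "+ 0" makes it a
# non-leaf tensor, so retain_grad() is needed to keep its gradient.
screenspace_points = torch.zeros_like(xyz, requires_grad=True) + 0
try:
    screenspace_points.retain_grad()
except Exception:
    pass
# The rasterizer would receive screenspace_points as means2D and its backward
# pass would deposit d(loss)/d(means2D) into it; a toy loss stands in here:
loss = (screenspace_points * 3.0).sum()
loss.backward()
# screenspace_points.grad now holds the per-point screen-space gradients
# that the adaptive-control logic accumulates across iterations.
```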