zeng-yifei / STAG4D

Official Implementation for STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians

How to improve the Texture Details #9

Closed sauradip closed 1 week ago

sauradip commented 4 weeks ago

Hi,

Thanks for open-sourcing this awesome work. I read the tips for improving results, but the texture still feels a bit lacking compared to existing models. How can I improve the texture? Any tips during training?

zeng-yifei commented 4 weeks ago

For most cases, if you don't care about the time cost at all, increasing the batch size to 4~5 and training for 12000 steps usually produces results that are good enough. Keep in mind that the marginal benefit decreases if you keep increasing these two parameters.
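For reference, those two knobs might look something like this in a training config (the key names here are illustrative guesses; check the repository's actual config file for the exact names):

```yaml
# Hypothetical config fragment -- verify the exact key names in the repo.
batch_size: 4    # 4~5 trades training time for texture quality
iters: 12000     # training steps; gains taper off beyond this
```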

I will also check again whether there is something I forgot to add when I arranged the code. Thank you for your feedback anyway.

sauradip commented 3 weeks ago

I also have a question about the "densify_grad_threshold" parameter. I know it is normally around 0.00x in the image variant, but I feel the STAG4D outputs are a bit blurry because of improper or overly aggressive pruning. Can you check if possible? Also, any tips on improving the sharpness? I guess this has something to do with the densification and pruning?

zeng-yifei commented 3 weeks ago

The original densification process is not influenced by the densify_grad_threshold in the config. It is instead controlled by line 579 of gs_renderer_4d.py:

```python
max_grad_2 = torch.exp(grad_log3.squeeze(dim=1).sort(descending=True)[0][int(0.025*grad_log3.shape[0])])
```

You can modify the percentage value 0.025 to control the densification process. I have just fixed the code to make it changeable in the config file via the parameter densify_grad_threshold_percent. You can pull the new code and check it out.
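The percentile selection in that line can be sketched in plain Python (an illustrative reimplementation, not the repository's exact code; `grad_log` stands in for the per-Gaussian log-gradient values, and the `percent` argument corresponds to densify_grad_threshold_percent in the config):

```python
import math

def percentile_grad_threshold(grad_log, percent=0.025):
    # Sort the log-gradient magnitudes in descending order, take the value
    # at the top-`percent` position as the densification cutoff, and map it
    # back out of log space -- mirroring torch.exp(sorted[int(percent * N)]).
    sorted_grads = sorted(grad_log, reverse=True)
    cutoff_idx = int(percent * len(sorted_grads))
    return math.exp(sorted_grads[cutoff_idx])

# Example: with 1000 Gaussians and percent=0.025, the 25th-largest
# log-gradient determines the threshold.
grad_log = [math.log(0.001 + 0.0001 * i) for i in range(1000)]
threshold = percentile_grad_threshold(grad_log, percent=0.025)
```

Because the cutoff adapts to the distribution of gradients each step, tuning the percentage is more robust across scenes than tuning a fixed absolute threshold.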

zeng-yifei commented 3 weeks ago

BTW, you can also try to use more points by decreasing densify_grad_threshold_percent or by adding more initial points via num_pts in the config. But in my experience the result is not guaranteed to be better. In principle, the result is ultimately determined by the capability of Zero123++ and Zero123. For cases similar to the Objaverse set they were trained on, the result will be better; otherwise, if the scene is out of distribution, the result will suffer accordingly.
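Put together, the two knobs mentioned here might look like this in a config file (the parameter names come from this thread, but the values are illustrative guesses, not recommendations from the repo):

```yaml
# Illustrative values only; see the repo's config for actual defaults.
densify_grad_threshold_percent: 0.02   # lower than 0.025 to use more points, per the comment above
num_pts: 8000                          # more initial Gaussians at startup
```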