Closed: RainbowRui closed this issue 7 months ago.
Hi, we did adjust the learning rates in the way you described.
Since we scale down the position and scaling of the Gaussians by the triangle scale, we scale up the learning rates so that the step size in metric space is similar to that of the original 3D-GS method.
For `position_lr`, we directly scale it by 1/0.032. For `scaling_lr`, we also want to scale it by 1/0.032, but since the scaling parameter is passed through an exponential function before use, we scale its learning rate by log(1/0.032) ≈ 3.4 instead, obtaining 0.017 for `scaling_lr`.
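For reference, here is a minimal arithmetic sketch of this scaling, assuming the original 3D-GS defaults (0.00016 for `position_lr_init`, 0.005 for `scaling_lr`) and a mean triangle scale of 0.032; the exact constants in the repository may differ.

```python
import math

TRIANGLE_SCALE = 0.032           # assumed mean triangle scale (metres)
GS_POSITION_LR_INIT = 0.00016    # original 3D-GS default
GS_SCALING_LR = 0.005            # original 3D-GS default

# Positions are divided by the triangle scale, so the learning rate is
# scaled up by the inverse factor to keep a similar metric step size.
position_lr_init = GS_POSITION_LR_INIT / TRIANGLE_SCALE    # = 0.005

# The scaling parameter passes through exp() before use, so the learning
# rate is multiplied by log(1 / 0.032) ~= 3.4 instead of 1 / 0.032.
scaling_lr = GS_SCALING_LR * math.log(1 / TRIANGLE_SCALE)  # ~= 0.017

print(f"position_lr_init = {position_lr_init:.5f}")  # 0.00500
print(f"scaling_lr       = {scaling_lr:.5f}")        # 0.01721
```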
Thanks for your patient answer! I have another question: does GaussianAvatars rely on highly accurate tracking results (e.g., a tracking error under 1 mm per vertex)? Thanks!
It has good tolerance to tracking errors thanks to the consistent binding of each 3D Gaussian to its parent triangle, which produces fewer artifacts than the baselines. (Relevant results and discussion can be found in the second paragraph of Section 4.2 of our paper.)
Thanks for your quick reply!
Excellent work!
I have a question: how are the values of `self.position_lr_init` and `self.scaling_lr` in `OptimizationParams` calculated? I found the annotation "scaled up according to mean triangle scale". Does it mean that the triangle scale in your implementation is 0.00016/0.005 = 0.032, i.e., 32 mm? That seems a bit big. Is my calculation wrong? And how is `self.scaling_lr` scaled up? Thanks!