Closed pablodawson closed 2 weeks ago
Hi, your experiments and attempts are very useful. I can share some of my thoughts.
NTC's warm-up corresponds to the prior that "motion between frames is small", while your approach (copying the previous frame's NTC weights) corresponds to the prior that "motion between frames is continuous and smooth". The latter performs better when the motion is large.
And yes, I have run into this issue before; my solution was to increase the number of training iterations and use the warmed-up weights.
Hope this helps.
Interesting, thanks for the insights. When training with more iterations, using a learning-rate scheduler also seems to help: the model fits the large motion first and then the details. I'll close this for now :)
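For reference, here is a minimal sketch of the kind of schedule meant above: a simple exponential decay from a high learning rate (which lets the NTC fit the large inter-frame motion) down to a low one (which refines the details). All names and values are illustrative, not taken from either repo.

```python
def lr_at_step(step, base_lr=1e-3, final_lr=1e-4, total_steps=1000):
    """Exponentially decay the learning rate from base_lr to final_lr.

    Early, high-lr steps let the per-frame NTC model the big movements;
    later, low-lr steps refine the fine details. (Illustrative sketch.)
    """
    decay = (final_lr / base_lr) ** (step / total_steps)
    return base_lr * decay


# In a PyTorch training loop this would typically be applied via
# torch.optim.lr_scheduler.ExponentialLR, or manually:
#   for group in optimizer.param_groups:
#       group["lr"] = lr_at_step(step)
```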
Hey, thanks for sharing your great work!
An issue I'm encountering with my own datasets is that the initial point cloud looks very good, but with each new frame the results get blurrier and blurrier until the scene is no longer recognizable. If you have encountered the same issue, what helped counter it?
So far, changing the iteration count just delays the degradation. I experimented with some hyperparameters, and getting the NTC's learning rate right seems to help. Another unexpected fix was initializing each frame from the previous frame's NTC weights instead of from the same pre-warmed ones; it seems to "preserve momentum" in some way, since it applies similar deltas again.
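The carry-over trick can be sketched as a toy per-frame loop; `train_on_frame` is a stand-in for the real NTC optimization, and none of this is the actual code from either repo:

```python
import copy

def train_on_frame(state, frame_idx):
    # Stand-in for the real per-frame NTC optimization; here we only
    # record which frames the weights have been adapted to.
    state = dict(state)
    state["adapted_frames"] = state.get("adapted_frames", []) + [frame_idx]
    return state

def stream(frames, warmup_state, carry_over=True):
    """Train one NTC per frame (illustrative sketch).

    carry_over=False: every frame restarts from the shared warm-up
                      weights (the "small motion" prior).
    carry_over=True : frame t initializes from frame t-1's trained
                      weights, preserving the motion "momentum"
                      described above (the "smooth motion" prior).
    """
    states = []
    prev = warmup_state
    for t in frames:
        init = prev if carry_over else warmup_state
        trained = train_on_frame(copy.deepcopy(init), t)
        states.append(trained)
        prev = trained
    return states
```

With `carry_over=True`, frame t's weights have seen every earlier frame's deltas; with `carry_over=False`, each frame starts fresh from the warm-up.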
Anyway, here's my implementation of your work in gsplat, in case you're interested: https://github.com/pablodawson/portals_trainer