Hi @zl264! Can you please provide more information about your training configuration? (e.g., which feature you are using, what the feature dimension is, and whether you applied --speedup during training?)
Another tip: usually ~7k iterations are enough for most commonly used real-world scene datasets, so you don't have to run the full 30k iterations if you want to save time.
Thank you for your reply. I used the default training configuration, trained for 30k iterations, applied --speedup, and used LSeg with 256-dimensional features per Gaussian. I'm now experimenting with other training configurations, such as reducing the feature dimension and the number of training iterations.
Yes, reducing the feature dimension and the number of training iterations should help; feature dim=128 with --speedup is a good choice for the time-quality tradeoff. Let me know if you have further questions on this!
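For reference, a training command along these lines might look like the sketch below. `--speedup` is the flag discussed in this thread; the other flag names (`-s` for the dataset path, `-f` for the feature type, `--iterations`) are assumptions modeled on common 3DGS-style training scripts, so check `python train.py --help` in the repo for the exact argument names:

```bash
# Sketch only: --speedup is the flag discussed in this thread; the remaining
# flag names are assumptions based on common 3DGS-style training scripts.
# With --speedup, the rendered feature dimension is reduced (e.g. to 128)
# for the time-quality tradeoff mentioned in the reply above.
python train.py -s /path/to/your/dataset \
    -f lseg \
    --iterations 7000 \
    --speedup
```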
Thanks for your open-source work! I used feature dim=128 and --speedup with LSeg for 7,000 iterations, and it still takes 2h30min on a 3090. Is this normal?
As the title suggests, I found that training feature-3dgs for 30,000 iterations on my own dataset takes 7.5 hours, with most of the time spent in the backward pass. Is this training time normal?