ShijieZhou-UCLA / feature-3dgs

[CVPR 2024 Highlight] Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields

Is it normal for feature-3dgs training with 30,000 iterations to take 7.5 hours on a 3090? #1

Closed zl264 closed 9 months ago

zl264 commented 10 months ago

As the title suggests, I found that training feature-3dgs with 30,000 iterations on my own dataset takes 7.5 hours, with most of the time spent on the backward pass. Is this training time normal?

ShijieZhou-UCLA commented 10 months ago

Hi @zl264! Can you please provide more information about your training configuration? (e.g., which feature are you using? what is the feature dimension? did you apply --speedup during training?)

Another tip: ~7k iterations is usually enough for most commonly used real-world scene datasets, so you don't have to run the full 30k iterations if you want to save time.
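
For example, a shorter run could look something like the sketch below. Only --speedup comes up in this thread; the other flags assume the vanilla 3DGS `train.py` conventions, so please double-check them against the README:

```bash
# Hedged example of a quicker run: ~7k iterations with the speed-up module enabled.
# -s and --iterations follow the vanilla 3DGS train.py conventions (an assumption here);
# point -s at your own dataset, and add whatever flag this repo uses to select the
# feature type (e.g. LSeg) -- that flag is intentionally omitted rather than guessed.
python train.py -s /path/to/your/scene --iterations 7000 --speedup
```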

zl264 commented 10 months ago

> Hi @zl264! Can you please provide more information about your training configuration? (e.g., which feature are you using? what is the feature dimension? did you apply --speedup during training?)
>
> Another tip: ~7k iterations is usually enough for most commonly used real-world scene datasets, so you don't have to run the full 30k iterations if you want to save time.

Thank you for your reply. I used the default training configuration: 30k iterations, --speedup applied, and LSeg features at 256 dimensions per Gaussian. I'm now experimenting with other configurations, such as reducing the feature dimension and the number of training iterations.

ShijieZhou-UCLA commented 10 months ago

Yes, reducing the feature dimension and the number of training iterations should help; feature dim=128 with --speedup is a good choice for the time-quality tradeoff. Let me know if you have further questions on this!
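
Roughly, the lower dimension helps because the rasterizer and its backward pass carry only a low-dimensional feature per Gaussian, and a lightweight convolution lifts the rendered feature map to the teacher's dimension just before the distillation loss. A minimal PyTorch sketch of that idea (class and tensor names here are illustrative, not this repo's actual code):

```python
# Illustrative sketch of the speed-up idea: rasterize low-dim features,
# then lift them to the teacher's dimension (e.g. LSeg) with a 1x1 conv
# only when computing the distillation loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureUpsampler(nn.Module):
    """Hypothetical lightweight decoder: low-dim rendered features -> teacher dim."""
    def __init__(self, low_dim: int = 128, teacher_dim: int = 512):
        super().__init__()
        self.proj = nn.Conv2d(low_dim, teacher_dim, kernel_size=1)

    def forward(self, rendered_feat: torch.Tensor) -> torch.Tensor:
        # rendered_feat: (N, low_dim, H, W) feature image from the rasterizer
        return self.proj(rendered_feat)

# Toy usage: distill against a frozen teacher feature map.
upsampler = FeatureUpsampler(low_dim=128, teacher_dim=512)
rendered = torch.randn(1, 128, 60, 80, requires_grad=True)  # stand-in for rasterized features
teacher = torch.randn(1, 512, 60, 80)                        # stand-in for teacher features
loss = F.l1_loss(upsampler(rendered), teacher)
loss.backward()  # gradients w.r.t. the rendered features stay 128-dimensional
```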

lziiid commented 2 months ago

> Yes, reducing the feature dimension and the number of training iterations should help; feature dim=128 with --speedup is a good choice for the time-quality tradeoff. Let me know if you have further questions on this!

Thanks for your open-source work! I used feature dim=128 and --speedup with LSeg for 7,000 iterations, and it still takes 2h30min on a 3090. Is this normal?