autonomousvision / LaRa

[ECCV 2024] Efficient Large-Baseline Radiance Fields, a feed-forward 2DGS model
https://apchenstu.github.io/LaRa/
MIT License

Dataloader is the bottleneck in training? #9

Open mengxuyiGit opened 3 weeks ago

mengxuyiGit commented 3 weeks ago

Thanks for the great work and code release!

However, I noticed a significant training-speed difference between loading the original uncompressed GObjaverse data and reading the processed h5 data: loading from h5 is about 6 times slower than from the uncompressed data. GPU utilization is also very low.

Is this normal?
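One way to check whether the dataloader really is the bottleneck is to time data loading and the training step separately. Below is a minimal, framework-agnostic sketch; `profile_pipeline`, its `n` parameter, and the simulated loaders are made up for illustration and are not part of the LaRa codebase.

```python
import time

def profile_pipeline(batches, step_fn, n=20):
    """Split wall time over n iterations into data-loading time
    (time spent in next(batches)) and compute time (time in step_fn).
    Returns (load_seconds, step_seconds)."""
    it = iter(batches)
    load_s = step_s = 0.0
    for _ in range(n):
        t0 = time.perf_counter()
        batch = next(it)            # e.g. next(iter(train_dataloader))
        load_s += time.perf_counter() - t0
        t0 = time.perf_counter()
        step_fn(batch)              # e.g. one forward/backward pass
        step_s += time.perf_counter() - t0
    return load_s, step_s

# Simulated example: a loader that is 10x slower than the step,
# mimicking a dataloader-bound pipeline with low GPU utilization.
def slow_batches():
    while True:
        time.sleep(0.005)           # stand-in for slow h5 reads
        yield 0

load_s, step_s = profile_pipeline(slow_batches(),
                                  lambda b: time.sleep(0.0005), n=10)
print(f"load: {load_s:.3f}s  step: {step_s:.3f}s")
```

If the loading share dominates, the GPU is starved and the observed low utilization follows directly.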

apchenstu commented 3 weeks ago

Thank you for the kind words! How slow is it? It is supposed to be faster than the uncompressed version, since the h5 files don't need to be decompressed and the dataset is stored in a hierarchical structure. GPU utilization is around 90%-100% on my side.