CUHK-AIM-Group / EndoGaussian

EndoGaussian: Real-time Gaussian Splatting for Dynamic Endoscopic Scene Reconstruction
https://yifliu3.github.io/EndoGaussian/
MIT License

How to reimplement with expected training time on NVIDIA RTX A5000 #6

Closed ysjue closed 6 months ago

ysjue commented 6 months ago

Hi, many thanks for updating this repo! When I ran the code, I found the training time for the pulling sequence is 3 min (coarse + fine), which is slightly longer than it should be. I am not sure if this is due to a hardware issue, but the GPU consumption looks fine. Could you tell me how you calculate the training time, and whether it is possible to reach the reported training time with different hardware? Many thanks!

yifliu3 commented 6 months ago

Hi,

We calculate the training time as (ending training time - beginning training time); you can find this information directly in the log printed during training.
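The timing reported in the log can be reproduced with a simple wall-clock measurement around the training loop. This is a minimal sketch with a placeholder `train` function, not the repository's actual training entry point:

```python
import time

def train(iterations):
    # Hypothetical stand-in for the actual optimization loop.
    total = 0
    for i in range(iterations):
        total += i  # one "training step"
    return total

start = time.time()            # beginning training time
train(1000)
elapsed = time.time() - start  # ending - beginning, as reported in the log
print(f"training time: {elapsed:.2f} s")
```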

We find that the training and inference time may depend on the hardware used, including both the GPU and the CPU. As mentioned in the paper, we use a single RTX 4090 GPU and a single Intel(R) Xeon(R) Gold 5418Y CPU.

In the previous version, there was a bug in the data loading: in each iteration, the data was loaded by the CPU, causing longer training times when insufficient CPU resources were available. After fixing this bug, training takes around 2 minutes on a single CPU and a single GPU. Please refer to the updated code.
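The fix described above amounts to moving the expensive loading work out of the per-iteration path. A hypothetical sketch of the pattern (the class and loader names here are illustrative, not the repository's actual code):

```python
# Instead of reading and decoding each frame from disk on every
# iteration (CPU-bound), cache all frames once up front so each
# training step is only a cheap in-memory lookup.

class CachedDataset:
    def __init__(self, loader_fn, num_frames):
        # Do the expensive CPU work (I/O, decoding) once at startup.
        self.frames = [loader_fn(i) for i in range(num_frames)]

    def __getitem__(self, idx):
        # Cheap in-memory lookup during training.
        return self.frames[idx]

def fake_load(i):
    # Stand-in for disk I/O / image decoding.
    return {"frame_id": i, "pixels": [0] * 4}

dataset = CachedDataset(fake_load, num_frames=3)
print(dataset[2]["frame_id"])
```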