horizon-research / Fov-3DGS

Official Implementation of RTGS: Enabling Real-Time Gaussian Splatting on Mobile Devices Using Efficiency-Guided Pruning and Foveated Rendering.

GPU VRAM for training and inference #1

Closed JiatengLiu closed 2 months ago

JiatengLiu commented 2 months ago

Hello! I noticed that you completed training on the NVIDIA Jetson Xavier board, and I would like to ask how much VRAM is used during training and inference. Looking forward to your reply!

linwk20 commented 2 months ago

We performed training on an RTX 4090 GPU and used a Jetson for inference rendering. The 4090's 24 GB of memory is sufficient for all scenes tested during training. For inference on the Jetson, we needed to disable data caching in GPU memory to prevent out-of-memory (OOM) errors; we achieved this using fps_mode, as shown in this code snippet. After disabling data caching, we successfully ran all scenes on the Jetson. It generally takes less than 5 GB of memory, if I remember correctly.
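For reference, the general idea behind disabling data caching is to keep the per-view images in host memory and copy only the view currently being rendered onto the GPU. The sketch below is a minimal, hypothetical PyTorch illustration of that pattern; the names `SimpleCamera`, `image_on`, and `render_all` are made up for this example and are not the repo's actual API (in the repo itself, the `fps_mode` path referenced above is what controls this behavior).

```python
# Hedged sketch: stream one frame at a time to the GPU instead of caching
# the whole dataset in GPU memory. All names here are illustrative.
import torch

class SimpleCamera:
    """Holds one view's ground-truth image on the CPU until it is needed."""
    def __init__(self, image: torch.Tensor):
        # Pinned host memory speeds up the later host-to-device copy.
        self.image_cpu = image.contiguous().pin_memory()

    def image_on(self, device: torch.device) -> torch.Tensor:
        # Transfer only this frame; the GPU copy is freed when dereferenced.
        return self.image_cpu.to(device, non_blocking=True)

def render_all(cameras, device=torch.device("cuda")):
    for cam in cameras:
        gt = cam.image_on(device)   # one frame resident on the GPU at a time
        # ... rasterize the Gaussians for this view and compare against `gt` ...
        del gt                      # drop the GPU copy before the next view
        torch.cuda.empty_cache()    # optional: return freed blocks to the driver

if __name__ == "__main__":
    if torch.cuda.is_available():
        # Dummy data: 10 RGB frames at 1080p kept in host memory.
        cams = [SimpleCamera(torch.rand(3, 1080, 1920)) for _ in range(10)]
        render_all(cams)
```

The trade-off is an extra host-to-device copy per frame, which is usually far cheaper than the OOM it avoids on memory-constrained boards like the Jetson.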

JiatengLiu commented 2 months ago

Thanks for your reply! 5 GB is very friendly for most embedded development boards. It's really commendable work!
