Closed: jly0810 closed this issue 6 months ago.
Will the training and editing code be released in the future? What hardware resources are needed for training (GPU model and memory), and roughly how long does training take?

We have already released the training code, which covers the generation part of the entire pipeline. We plan to release the editing code in early April.

In our experiments we used a single NVIDIA A100 GPU. The NeRF training stage requires approximately 12 GB of GPU memory, while the DMTet training stage requires around 24 GB, so the code should run on any GPU that meets these memory requirements, such as the V100. As for training time, the NeRF stage takes approximately 10 minutes and the DMTet stage around 35 minutes.