Open moon5756 opened 4 years ago
First, your hardware configuration seems quite good, which matters when working with any 3D network. For the training configuration, I remember that I used 3 GPUs, so it is normal that the original batch size doesn't fit on a single GPU. My advice is to make sure the batch size is always greater than 32 (this helps the batch statistics capture enough variety for the model to generalize well). Another tip is to use the apex library for mixed-precision training (https://github.com/NVIDIA/apex); it wasn't mature enough when I did this project.
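One way to keep the effective batch size above 32 on a single GPU is gradient accumulation. A minimal sketch (not from this repo, just illustrative arithmetic): pick how many forward/backward passes to accumulate before each optimizer step so that per-GPU batch × GPUs × steps reaches the target.

```python
# Illustrative sketch (not this repo's code): compute how many
# gradient-accumulation steps are needed so the *effective* batch
# size stays >= a target (e.g. 32) when only a small per-GPU batch
# fits in memory.
import math

def accumulation_steps(per_gpu_batch: int, num_gpus: int, target_batch: int = 32) -> int:
    """Number of forward/backward passes to accumulate before optimizer.step()."""
    per_step = per_gpu_batch * num_gpus
    return max(1, math.ceil(target_batch / per_step))

# Example: if only a batch of 8 fits on one RTX 2080 Ti, accumulate
# 4 steps for an effective batch of 32.
print(accumulation_steps(per_gpu_batch=8, num_gpus=1))  # -> 4
```

In the training loop you would then call `loss.backward()` every iteration but `optimizer.step()` only every `accumulation_steps` iterations (scaling the loss by the step count).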
Another important thing is data loading. If you have an SSD, make sure your data is stored on it (it greatly increases loading speed). Slow loading leads to GPU starvation, which slows down the whole training process.
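A quick way to check whether loading is the bottleneck is to time how long the loop waits on the data loader versus how long the training step itself takes. A small stdlib-only sketch (names `loader` and `train_step` are placeholders for your own DataLoader and step function):

```python
# Sketch for spotting GPU starvation: split wall time into
# "waiting for data" vs. "compute" over the first few batches.
import time

def profile_loop(loader, train_step, max_batches=100):
    """Return (seconds spent fetching batches, seconds spent in train_step)."""
    load_time = compute_time = 0.0
    t0 = time.perf_counter()
    for i, batch in enumerate(loader):
        t1 = time.perf_counter()
        load_time += t1 - t0          # time blocked on the loader
        train_step(batch)
        t0 = time.perf_counter()
        compute_time += t0 - t1       # time spent in the actual step
        if i + 1 >= max_batches:
            break
    return load_time, compute_time
```

If `load_time` dominates `compute_time`, the GPU is starving; moving the data to the SSD, raising the DataLoader's `num_workers`, and enabling `pin_memory=True` are the usual fixes.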
The training could take several days (depending on your configuration).
If you like this project, feel free to leave a star (it is my only reward ^^).
Thanks for the prompt response. Based on the TensorBoard logs, it seems yours also took about a day and 14 hours for 350 epochs. Thanks for the suggestions. I already left a star.
Hi, thanks for the really helpful work.
I just wonder how long it took for the training. My desktop has the following cpu and gpu.
CPU: Intel Core i7-6900K @ 3.2GHz
SSD: Samsung SSD 850 EVO
GPU: NVIDIA GeForce RTX 2080 Ti
I ran the training script and it says "active GPUs: 0", from which I can tell my GPU is being used. I changed the batch size to 50 in config.json because it complained about an OOM error.
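For reference, the change was a single field in config.json; the exact key name in this repo may differ, so treat `batch_size` below as an assumption:

```json
{
    "batch_size": 50
}
```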
I ran the script for about 23 minutes and it only completed one epoch. One concern is that CPU utilization sits around 99% while GPU utilization stays below 10%. Is there any configuration I need to change to fully utilize the GPU? The command line log follows.
// EDIT: Wait a sec... I just checked the TensorBoard... is it supposed to take more than 1 day?