Open wangdada-love opened 9 months ago
Hello @wangdada-love ,
Thank you for the interesting question. Let me answer the 2nd one first. DALI is used in the MLPerf benchmarks posted by NVIDIA. Since MLPerf is all about performance, if native TF were faster, we'd be using that one ;) Additionally, we have a multitude of success stories (please refer here) that emphasise how DALI helps with data augmentation.
With regards to your first question, without some additional details it is hard to tell what's happening. If you'd like to diagnose it, I'd suggest two things. First, please look at the output of `nvidia-smi` and `htop` and verify whether your worker resources are 100% utilized. If they are not, it is likely that you need to tune the pipeline parameters (e.g. `num_threads`, `batch_size`, `hw_decoder_load`) to fit the multi-GPU environment (see the pipeline sketch after the profiling note below). Secondly, you may want to profile your training. You can find many resources and tutorials on profiling with Nsight Systems. TL;DR - you can invoke your training using `nsys` like this:
nsys profile -o my_profile python train.py
Then use the Nsight Systems GUI to open the captured profile and see what happened.
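Coming back to the parameters mentioned above, here is a minimal, hypothetical pipeline sketch (the data path, image size, and concrete values are placeholders, not taken from your setup) showing where `num_threads`, `batch_size` and `hw_decoder_load` are set. Tune them until `nvidia-smi` / `htop` show the workers fully utilized:

```python
# Minimal DALI pipeline sketch (values are illustrative, not a recommendation).
from nvidia.dali import pipeline_def
import nvidia.dali.fn as fn

@pipeline_def(batch_size=32, num_threads=4, device_id=0)
def training_pipeline(data_dir, shard_id=0, num_shards=1):
    # Each GPU should read its own shard so the workers don't duplicate work.
    jpegs, labels = fn.readers.file(
        file_root=data_dir, shard_id=shard_id, num_shards=num_shards,
        random_shuffle=True, name="Reader")
    # hw_decoder_load splits JPEG decoding between the HW decoder and the GPU.
    images = fn.decoders.image(jpegs, device="mixed", hw_decoder_load=0.65)
    images = fn.resize(images, resize_x=512, resize_y=512)
    return images, labels
```

Roughly speaking, increasing `num_threads` gives each GPU's pipeline more CPU workers; if `htop` already shows the CPU saturated, raising it further won't help and shifting more decoding work to the GPU (e.g. via `hw_decoder_load`) may.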
Describe the question.
I have rewritten my data augmentation methods using the DALI module and applied them to train a DeeplabV3 model based on TensorFlow. However, I have observed that training is faster on a single GPU, and the speed decreases significantly when training on 4 GPUs. Both my data augmentation methods and the creation of the DALIDataset follow the official documentation: https://docs.nvidia.com/deeplearning/dali/user-guide/docs/examples/frameworks/tensorflow/tensorflow-dataset-multigpu.html
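For context, this is roughly the per-GPU setup that tutorial describes. It is only a sketch: `training_pipeline`, the output shapes/dtypes, the data path, and the batch size below are placeholders, not my exact code.

```python
# Sketch of per-GPU DALIDataset placement under MirroredStrategy,
# based on the linked tutorial; names and shapes are illustrative.
import tensorflow as tf
import nvidia.dali.plugin.tf as dali_tf

num_gpus = 4
batch_size = 32
strategy = tf.distribute.MirroredStrategy(devices=[f"/gpu:{i}" for i in range(num_gpus)])

def dataset_fn(input_context):
    device_id = input_context.input_pipeline_id
    with tf.device(f"/gpu:{device_id}"):
        # Each GPU builds its own pipeline reading its own shard of the data.
        pipe = training_pipeline(
            data_dir="/path/to/data",
            shard_id=device_id, num_shards=num_gpus,
            device_id=device_id)
        return dali_tf.DALIDataset(
            pipeline=pipe,
            batch_size=batch_size,
            output_shapes=((batch_size, 512, 512, 3), (batch_size,)),
            output_dtypes=(tf.float32, tf.int32),
            device_id=device_id)

# Keep the data on the GPU that produced it instead of routing it through the host.
# (Older TF versions name the second option experimental_prefetch_to_device.)
input_options = tf.distribute.InputOptions(
    experimental_place_dataset_on_device=True,
    experimental_fetch_to_device=False,
    experimental_replication_mode=tf.distribute.InputReplicationMode.PER_REPLICA)

train_dataset = strategy.distribute_datasets_from_function(dataset_fn, input_options)
```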
My current concerns are as follows:
Check for duplicates