NVIDIA / DALI

A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference applications.
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html
Apache License 2.0

why multi-gpu training slower than single gpu #5250


wangdada-love commented 9 months ago

Describe the question.

I have rewritten my data augmentation methods using the DALI module and applied them to training a DeeplabV3 model in TensorFlow. However, I have observed that training is faster on a single GPU, and the speed drops significantly when training on 4 GPUs. Both my data augmentation methods and the creation of the DALIDataset follow the official documentation: https://docs.nvidia.com/deeplearning/dali/user-guide/docs/examples/frameworks/tensorflow/tensorflow-dataset-multigpu.html
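
For reference, the multi-GPU wiring from that tutorial looks roughly like this (a simplified sketch, not my exact code; file paths, shapes, and the actual augmentations are placeholders):

import tensorflow as tf
import nvidia.dali.fn as fn
import nvidia.dali.types as types
import nvidia.dali.plugin.tf as dali_tf
from nvidia.dali import pipeline_def

NUM_GPUS = 4
BATCH_SIZE = 64

@pipeline_def(batch_size=BATCH_SIZE, num_threads=4)
def train_pipeline(shard_id, num_shards):
    # Each GPU reads its own shard of the dataset.
    jpegs, labels = fn.readers.file(
        file_root="/path/to/images",            # placeholder
        shard_id=shard_id, num_shards=num_shards,
        random_shuffle=True, name="Reader")
    images = fn.decoders.image(jpegs, device="mixed")
    images = fn.resize(images, resize_x=512, resize_y=512)  # placeholder size
    images = fn.cast(images, dtype=types.FLOAT)
    return images, labels

strategy = tf.distribute.MirroredStrategy()

def dataset_fn(input_context):
    # One DALI pipeline per GPU, placed on that GPU.
    device_id = input_context.input_pipeline_id
    with tf.device("/gpu:{}".format(device_id)):
        pipe = train_pipeline(device_id=device_id,
                              shard_id=device_id, num_shards=NUM_GPUS)
        return dali_tf.DALIDataset(
            pipeline=pipe,
            batch_size=BATCH_SIZE,
            output_shapes=((BATCH_SIZE, 512, 512, 3), (BATCH_SIZE, 1)),
            output_dtypes=(tf.float32, tf.int32),
            device_id=device_id)

# Keep each replica's dataset on its own GPU, as in the tutorial.
# (On older TF versions the second field is experimental_prefetch_to_device,
# and the method below is experimental_distribute_datasets_from_function.)
input_options = tf.distribute.InputOptions(
    experimental_place_dataset_on_device=True,
    experimental_fetch_to_device=False,
    experimental_replication_mode=tf.distribute.InputReplicationMode.PER_REPLICA)

train_dataset = strategy.distribute_datasets_from_function(dataset_fn, input_options)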

My current concerns are as follows:

  1. Why is training slower with multiple GPUs, when even with a batch size of 64 the GPU memory is fully utilized?
  2. Does DALI-based data augmentation really provide a noticeable speed improvement over TensorFlow's native data augmentation? Is it worth continuing to validate the DALI approach?


szalpal commented 9 months ago

Hello @wangdada-love ,

Thank you for the interesting question. Let me answer the second one first. DALI is used in the MLPerf benchmarks submitted by NVIDIA. Since MLPerf is all about performance, if native TF were faster, we'd be using that instead ;) Additionally, we have a multitude of success stories (please refer here) that show how DALI helps with data augmentation.

With regard to your first question, it is hard to tell what is happening without some additional details. If you'd like to diagnose it, I can suggest two things. First, look at the output of nvidia-smi and htop and verify that your worker resources are 100% utilized. If they are not, you likely need to tune the DALI pipeline parameters (e.g. num_threads, batch_size, hw_decoder_load; see the sketch at the end of this comment) to fit the multi-GPU environment. Secondly, you may want to profile your training. You can find many resources and tutorials on profiling with Nsight Systems. TL;DR - you can invoke your training with nsys like this:

nsys profile -o my_profile python train.py

Then open the captured profile in Nsight Systems and look at what happened.
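
For reference, the knobs mentioned above are set when the DALI pipeline is constructed; a rough sketch (the values below are only starting points to tune, and the reader/paths are placeholders):

import nvidia.dali.fn as fn
from nvidia.dali import pipeline_def

@pipeline_def(
    batch_size=64,           # per-GPU batch size
    num_threads=8,           # CPU worker threads per pipeline (i.e. per GPU)
    prefetch_queue_depth=2)  # how many batches DALI prepares ahead
def tuned_pipeline(shard_id, num_shards):
    jpegs, labels = fn.readers.file(
        file_root="/path/to/images",  # placeholder
        shard_id=shard_id, num_shards=num_shards, random_shuffle=True)
    # hw_decoder_load controls how much JPEG decoding is offloaded to the
    # hardware decoder (on GPUs that have one) vs. the hybrid GPU/CPU path.
    images = fn.decoders.image(jpegs, device="mixed", hw_decoder_load=0.65)
    return images, labels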