open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

COCODataset instantiation ignores shared memory settings during annotation loading in distributed training #11318

Open h-fernand opened 10 months ago

h-fernand commented 10 months ago

Describe the bug
The COCODataset implementation uses the COCO API (pycocotools) to load the dataset annotations on initialization. Because a COCODataset instance is created once per GPU worker, the annotations are parsed once per worker. This is fine for smaller datasets, but with larger datasets it quickly exhausts all system RAM (NOT GPU RAM) when training on multiple GPUs. The BaseDataset class sets serialize_data to True by default, which should result in the dataset being shared across GPU workers, but this does not appear to help COCODataset, since the annotations are fully loaded before the data ever has a chance to be serialized.
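To make the failure mode concrete, here is a minimal sketch (an illustration of the behavior I am describing, not the actual mmdetection code path) of the work each distributed rank repeats before any serialization can happen:

```python
# Illustration only: roughly what every spawned rank pays at startup, assuming
# the annotation file is parsed with pycocotools before BaseDataset can
# serialize the resulting data list.
from pycocotools.coco import COCO


def load_annotations(ann_file):
    coco = COCO(ann_file)  # parses and indexes the entire JSON in this process
    # One record per image; with ~100-250 instances per image this
    # intermediate structure alone is very large.
    return [
        coco.loadAnns(coco.getAnnIds(imgIds=img_id))
        for img_id in coco.getImgIds()
    ]

# dist_train.sh launches one process per GPU, so the peak memory of
# load_annotations() is paid simultaneously on every GPU of the same machine.
```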

Reproduction

  1. What command or script did you run?
tools/dist_train.sh /path/to/my/config.py 2 --auto-scale-lr
  2. Did you make any modifications on the code or config? Did you understand what you have modified? The only modification I made to my config file was to point it at my dataset location.
  3. What dataset did you use? A 120 GB custom COCO-format instance segmentation dataset with a high volume of instances per image (~100-250 instances per image).

Environment
sys.platform: linux
Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1: NVIDIA RTX A5500
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.1, V11.1.105
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.9.0
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2021.2-Product Build 20210312 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.1
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.0.5
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 

TorchVision: 0.10.0
OpenCV: 4.8.1
MMEngine: 0.10.1
MMDetection: 3.2.0+fe3f809

Error traceback
There is no applicable traceback. During the "loading annotations" phase before training, system RAM usage (my workstation has 256 GB of RAM) climbs steadily until it hits 256 GB and the worker is killed.
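In case it helps anyone reproduce this, a throwaway helper like the one below (hypothetical, not part of mmdetection) can be called before and after dataset construction to make the per-rank climb visible in the logs:

```python
# Hypothetical debugging helper, not an mmdetection utility: print the resident
# memory of the current process so each rank's climb shows up in the logs.
import os

import psutil


def log_rss(tag: str) -> None:
    rss_gib = psutil.Process(os.getpid()).memory_info().rss / 1024 ** 3
    print(f'[pid {os.getpid()}] {tag}: {rss_gib:.1f} GiB resident')
```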

Bug fix
See above. The problem occurs because the annotations are loaded through the pycocotools API once for every GPU process.
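For what it's worth, one mitigation I can imagine (a rough sketch under my own assumptions, not an existing mmdetection or mmengine option) is to stagger dataset initialization across ranks so that only one process sits in the pre-serialization peak at a time. It would not shrink each rank's final footprint, but it would avoid paying the pycocotools parsing peak on every GPU simultaneously:

```python
# Sketch only: build_dataset_staggered is a hypothetical helper, not an
# mmdetection API. Assumes torch.distributed has already been initialized
# by dist_train.sh.
import torch.distributed as dist
from mmdet.datasets import CocoDataset


def build_dataset_staggered(**dataset_cfg):
    rank = dist.get_rank() if dist.is_initialized() else 0
    world_size = dist.get_world_size() if dist.is_initialized() else 1
    dataset = None
    for turn in range(world_size):
        if turn == rank:
            # full_init() -> load_data_list() -> the pycocotools parse happens
            # here, while the other ranks wait at the barrier below.
            dataset = CocoDataset(lazy_init=False, **dataset_cfg)
        if dist.is_initialized():
            dist.barrier()
    return dataset
```

Again, this is only a sketch of a direction: each rank would still end up holding its own copy of the serialized data list afterwards.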

h-fernand commented 9 months ago

Has anyone else been able to look into this or run into the same issue? There is no reason this shouldn't be fixable, but it seems like it might be a fairly significant undertaking.