InterDigitalInc / CompressAI-Trainer

Training platform for end-to-end compression models, losses and metrics defined in CompressAI
https://interdigitalinc.github.io/CompressAI-Trainer/index.html
BSD 3-Clause Clear License

Questions about multi-GPU training #1

Open 1chizhang opened 7 months ago

1chizhang commented 7 months ago

Hi, thanks for your work. I recently wanted to try multi-GPU training, but I realized that it defaults to DataParallel instead of DDP. Can you tell me where I can switch to DDP mode?

YodaEmbedding commented 7 months ago

From Catalyst's Runner.train code and the DDP documentation, it looks like one of these should work:

compressai-train ++engine.ddp=True

OR

compressai-train ++engine.engine="ddp"

OR

from catalyst import dl

runner.train(engine=dl.DistributedDataParallelEngine())

P.S. It should automatically use all detected GPUs. If it does not, you may need to export CUDA_VISIBLE_DEVICES="0,1,2,3" beforehand to enable GPU/CUDA devices 0, 1, 2, and 3.
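
Putting those together, a complete invocation might look like the following (the lambda override is just the example used later in this thread; list whichever GPU indices your machine actually has):

CUDA_VISIBLE_DEVICES="0,1,2,3" compressai-train ++criterion.lmbda=0.035 ++engine.ddp=True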

1chizhang commented 7 months ago

The first one works, but it raises more problems that I could not solve. Can you provide versions of all the packages you used for testing?

```
[2024-02-12 12:40:55,693][aim.sdk.reporter][INFO] - creating RunStatusReporter for 8c46505939814d73af135df2
[2024-02-12 12:40:55,693][aim.sdk.reporter][INFO] - starting from: {}
[2024-02-12 12:40:55,694][aim.sdk.reporter][INFO] - starting writer thread for <aim.sdk.reporter.RunStatusReporter object at 0x7f8db06d3070>
Error executing job with overrides: ['++criterion.lmbda=0.035', '++engine.ddp=True']
Traceback (most recent call last):
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/bin/compressai-train", line 8, in <module>
    sys.exit(main())
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/hydra/main.py", line 90, in decorated_main
    _run_hydra(
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/hydra/_internal/utils.py", line 394, in _run_hydra
    _run_app(
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/hydra/_internal/utils.py", line 457, in _run_app
    run_and_report(
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/hydra/_internal/utils.py", line 222, in run_and_report
    raise ex
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/hydra/_internal/utils.py", line 219, in run_and_report
    return func()
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/hydra/_internal/utils.py", line 458, in <lambda>
    lambda: hydra.run(
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
  File "/home/zhan5096/Project/Trainer/compressai-trainer/compressai_trainer/run/train.py", line 100, in main
    _main(conf)
  File "/home/zhan5096/Project/Trainer/compressai-trainer/compressai_trainer/run/train.py", line 94, in _main
    runner.train(**engine_kwargs)
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/catalyst/runners/runner.py", line 377, in train
    self.run()
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/catalyst/core/runner.py", line 422, in run
    self._run_event("on_exception")
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/catalyst/core/runner.py", line 365, in _run_event
    getattr(self, event)(self)
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/catalyst/core/runner.py", line 357, in on_exception
    raise self.exception
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/catalyst/core/runner.py", line 419, in run
    self._run()
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/catalyst/core/runner.py", line 410, in _run
    self.engine.spawn(self._run_local)
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/catalyst/engines/torch.py", line 115, in spawn
    return mp.spawn(
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 241, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method="spawn")
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
    process.start()
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/home/zhan5096/Anaconda/enter/envs/Trainer/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "", line 2, in aimrocks.lib_rocksdb.DB.__reduce_cython__
TypeError: no default __reduce__ due to non-trivial __cinit__
```

YodaEmbedding commented 7 months ago

I'm guessing the `DB` object from aim needs to move across processes, but that object does not have a `__reduce__` defined for pickling/serializing.
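
As a standalone illustration (this is a minimal sketch, not CompressAI-Trainer or aim code): with the `spawn` start method, everything the child process needs is pickled in the parent, so an object whose class refuses pickling raises this kind of `TypeError` before training even begins.

```python
import multiprocessing as mp


class Unpicklable:
    """Stands in for aimrocks.lib_rocksdb.DB, which cannot be pickled."""

    def __reduce__(self):
        # Mimics the error raised by the Cython class in the traceback above.
        raise TypeError("no default __reduce__ due to non-trivial __cinit__")


def worker(resource):
    print("child got", resource)


if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=worker, args=(Unpicklable(),))
    try:
        p.start()  # args are pickled in the parent process; this raises TypeError
        p.join()
    except TypeError as exc:
        print("spawn failed:", exc)
```

The usual way around this is to create such resources inside the spawned worker (or only on rank 0) rather than passing live handles across the process boundary.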


The package requirements are in pyproject.toml. The exact versions used are specified in poetry.lock. Here is an exported requirements.txt:
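
(For reference, a pinned list like the one below can be regenerated from the lock file with Poetry's export command, assuming the export feature/plugin is available in your Poetry version: `poetry export -f requirements.txt --output requirements.txt --without-hashes`.)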

```
absl-py==2.0.0
accelerate==0.15.0
aim-ui==3.17.5
aim==3.17.5
aimrecords==0.0.7
aimrocks==0.4.0
aiofiles==23.2.1
alembic==1.12.1
annotated-types==0.6.0
antlr4-python3-runtime==4.9.3
anyio==3.7.1
backoff==2.2.1
base58==2.0.1
cachetools==5.3.2
catalyst==22.4
certifi==2023.7.22
cffi==1.16.0
charset-normalizer==3.3.1
click==8.1.7
cmake==3.27.7
colorama==0.4.6
contourpy==1.1.1
cryptography==41.0.5
cycler==0.12.1
exceptiongroup==1.1.3
fastapi==0.104.0
filelock==3.13.0
fonttools==4.43.1
google-auth-oauthlib==1.0.0
google-auth==2.23.3
greenlet==3.0.1
grpcio==1.59.0
h11==0.14.0
hydra-core==1.3.2
hydra-slayer==0.4.1
idna==3.4
importlib-metadata==6.8.0
importlib-resources==6.1.0
jinja2==3.1.2
kiwisolver==1.4.5
lit==17.0.4
mako==1.2.4
markdown==3.5
markupsafe==2.1.3
matplotlib==3.7.3
monotonic==1.6
mpmath==1.3.0
networkx==3.1
numpy==1.24.4
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
oauthlib==3.2.2
omegaconf==2.3.0
packaging==23.2
pandas==2.0.3
pillow==9.5.0
plotly==5.18.0
protobuf==4.24.4
psutil==5.9.6
py3nvml==0.2.7
pyasn1-modules==0.3.0
pyasn1==0.5.0
pycparser==2.21
pydantic-core==2.10.1
pydantic==2.4.2
pyparsing==3.1.1
python-dateutil==2.8.2
pytorch-msssim==0.2.1
pytz==2023.3.post1
pyyaml==6.0.1
requests-oauthlib==1.3.1
requests==2.31.0
restrictedpython==6.2
rsa==4.9
scipy==1.10.1
seaborn==0.12.2
segment-analytics-python==2.2.3
setuptools-scm==8.0.4
setuptools==68.2.2
six==1.16.0
sniffio==1.3.0
sqlalchemy==1.4.49
starlette==0.27.0
sympy==1.12
tenacity==8.2.3
tensorboard-data-server==0.7.2
tensorboard==2.14.0
tensorboardx==2.6.2.2
toml==0.10.2
tomli==2.0.1
torch==2.0.0
torchvision==0.15.1
tqdm==4.66.1
triton==2.0.0
typing-extensions==4.8.0
tzdata==2023.3
urllib3==2.0.7
uvicorn==0.23.2
werkzeug==3.0.1
wheel==0.41.2
xmltodict==0.13.0
zipp==3.17.0
```
faymek commented 3 months ago

Still, multi-GPU training raises another question.

CUDA_VISIBLE_DEVICES=0,1 compressai-train --config-name="example" ++criterion.lmbda=0.035

will report:

DataParallelEngine.prepare_model() got an unexpected keyword argument 'device_placement'

Catalyst in this issue suggests using accelerate==0.5.1, while the version installed here is 0.15.0.
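
If the incompatibility really is between Catalyst 22.4 and the newer accelerate, one possible (untested) workaround is to pin accelerate to the version that Catalyst issue suggests, e.g. `pip install "accelerate==0.5.1"`, and check whether `DataParallelEngine.prepare_model()` is then called with arguments it understands.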

It seems that Catalyst and Aim are not playing very nicely together at this time. :cry: