ultralytics / yolov5

YOLOv5 šŸš€ in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Training on multiple GPU on HPC servers #3897

Closed Mai-Sirhan closed 3 years ago

Mai-Sirhan commented 3 years ago

I am using a virtual machine on Microsoft Azure with Ubuntu 18.04 and four Tesla K80 GPUs. I ran the model with this command: python -m torch.distributed.launch --nproc_per_node 4 train.py. Unfortunately, I got these errors:


        CHILD PROCESS FAILED WITH NO ERROR_FILE

    Child process 6597 (local_rank 0) FAILED (exitcode 1)
    Error msg: Process failed with exitcode 1
    Without writing an error file to <N/A>.
    While this DOES NOT affect the correctness of your application,
    no trace information about the error will be available for inspection.
    Consider decorating your top level entrypoint function with
    torch.distributed.elastic.multiprocessing.errors.record. Example:

        from torch.distributed.elastic.multiprocessing.errors import record

        @record
        def trainer_main(args):
            # do train

    warnings.warn(_no_error_file_warning_msg(rank, failure))
    Traceback (most recent call last):
      File "/anaconda/envs/yolov5/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/anaconda/envs/yolov5/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/anaconda/envs/yolov5/lib/python3.6/site-packages/torch/distributed/launch.py", line 173, in <module>
        main()
      File "/anaconda/envs/yolov5/lib/python3.6/site-packages/torch/distributed/launch.py", line 169, in main
        run(args)
      File "/anaconda/envs/yolov5/lib/python3.6/site-packages/torch/distributed/run.py", line 624, in run
        )(*cmd_args)
      File "/anaconda/envs/yolov5/lib/python3.6/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "/anaconda/envs/yolov5/lib/python3.6/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
        return f(*args, **kwargs)
      File "/anaconda/envs/yolov5/lib/python3.6/site-packages/torch/distributed/launcher/api.py", line 247, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

        train.py FAILED
    =======================================
    Root Cause:
    [0]:
      time: 2021-07-05_14:55:32
      rank: 0 (local_rank: 0)
      exitcode: 1 (pid: 6597)
      error_file: <N/A>
      msg: "Process failed with exitcode 1"
    Other Failures:
    [1]:
      time: 2021-07-05_14:55:32
      rank: 1 (local_rank: 1)
      exitcode: 1 (pid: 6598)
      error_file: <N/A>
      msg: "Process failed with exitcode 1"
    [2]:
      time: 2021-07-05_14:55:32
      rank: 2 (local_rank: 2)
      exitcode: 1 (pid: 6599)
      error_file: <N/A>
      msg: "Process failed with exitcode 1"
    [3]:
      time: 2021-07-05_14:55:32
      rank: 3 (local_rank: 3)
      exitcode: 1 (pid: 6600)
      error_file: <N/A>
      msg: "Process failed with exitcode 1"


I would be happy for any help.
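(For context, the Multi-GPU Training tutorial launches DDP with explicit batch-size, data, weights and device arguments rather than train.py defaults. A rough sketch of such an invocation is below; the dataset and weights values are placeholders, not taken from this thread:)

```shell
# Hypothetical full DDP launch, along the lines of the Multi-GPU tutorial.
# --nproc_per_node is the number of GPU processes to spawn, --batch-size
# is the TOTAL batch across all GPUs (should be divisible by the GPU
# count), and --device lists which GPUs to use.
python -m torch.distributed.launch --nproc_per_node 4 train.py \
    --batch-size 64 \
    --data coco128.yaml \
    --weights yolov5s.pt \
    --device 0,1,2,3
```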

github-actions[bot] commented 3 years ago

šŸ‘‹ Hello @Mai-Sirhan, thank you for your interest in šŸš€ YOLOv5! Please visit our ā­ļø Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a šŸ› Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ā“ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 3 years ago

@Mai-Sirhan I would not recommend any number of K80 GPUs if you can avoid them as they are as slow as you can get.

In any case if you are training DDP you should start with the Multi-GPU Training tutorial and always train in our Docker image.
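(The Docker quickstart referenced here amounts to roughly the following; the image name matches the official Docker Hub listing, but the tag and flags below are an illustrative sketch, so verify them against the Docker Quickstart guide:)

```shell
# Pull the official YOLOv5 image from Docker Hub.
sudo docker pull ultralytics/yolov5:latest

# Run it with GPU access; --ipc=host avoids shared-memory errors
# from PyTorch DataLoader workers inside the container.
sudo docker run --gpus all --ipc=host -it ultralytics/yolov5:latest
```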

YOLOv5 Tutorials


Mai-Sirhan commented 3 years ago

Hi @glenn-jocher,

I ran the model following the instructions in your tutorial.
I installed the environment as you described with pip install -r requirements.txt. After that I ran the model using your command python -m torch.distributed.launch --nproc_per_node 4 train.py.

What could the problem be in my case?

Another question: what kind of GPUs do you recommend?

glenn-jocher commented 3 years ago

@Mai-Sirhan as I said above run all DDP trainings in our Docker image.

glenn-jocher commented 3 years ago

@Mai-Sirhan if you are running cloud instances 1-2 T4s will probably serve you better than 4 K80s and save you money as well.

Mai-Sirhan commented 3 years ago

Hi @glenn-jocher

I followed your advice and installed your Docker image. Now I want to run the container against my local version of YOLOv5 rather than the latest version on GitHub. How can I do that?

Thank you
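(One common approach, not an answer given in this thread, is to bind-mount the local clone into the container and work from that directory instead of the baked-in copy. A sketch, where the host path and the /usr/src/local_yolov5 mount point are both arbitrary choices:)

```shell
# Mount a local yolov5 checkout into the container with -v HOST:CONTAINER.
sudo docker run --gpus all --ipc=host -it \
    -v /home/user/yolov5:/usr/src/local_yolov5 \
    ultralytics/yolov5:latest

# Then, inside the container, train from the mounted local copy:
#   cd /usr/src/local_yolov5 && python train.py ...
```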

abuelgasimsaadeldin commented 3 years ago

Hi @glenn-jocher, I am trying to train on 2 GeForce RTX 2080 Ti GPUs using DDP training and I receive the same error as @Mai-Sirhan above. I have followed all the instructions, from cloning the repo to installing the requirements. My question is: will DDP training only work in a Docker container, or is this issue fixable? Thank you.

Edit: Using torch 1.9.0+cu102 and torchvision 0.10.0+cu102.

glenn-jocher commented 3 years ago

@abuelgasimsaadeldin šŸ‘‹ hi, thanks for letting us know about this problem with YOLOv5 šŸš€. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the šŸ› Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! šŸ˜ƒ

github-actions[bot] commented 3 years ago

šŸ‘‹ Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Access additional YOLOv5 šŸš€ resources:

Access additional Ultralytics āš” resources:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 šŸš€ and Vision AI ā­!