THU-MIG / yolov10

YOLOv10: Real-Time End-to-End Object Detection
https://arxiv.org/abs/2405.14458
GNU Affero General Public License v3.0

GPUs not found #226

[Open] MuhabHariri opened this issue 1 month ago

MuhabHariri commented 1 month ago

Hi,

I am facing this error while training YOLOv10:

(yolov10) C:\Users\muh\yolov10>yolo detect train data=coco8.yaml model=yolov10n.yaml epochs=1 batch=32 imgsz=640
New https://pypi.org/project/ultralytics/8.2.28 available 😃 Update with 'pip install -U ultralytics'
Ultralytics YOLOv8.1.34 🚀 Python-3.9.19 torch-2.0.1+cu118 CUDA:0 (NVIDIA GeForce RTX 3060, 12287MiB)
engine\trainer: task=detect, mode=train, model=yolov10n.yaml, data=coco8.yaml, epochs=1, time=None, patience=100, batch=32, imgsz=640, save=True, save_period=-1, val_period=1, cache=False, device=None, workers=8, project=None, name=train, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=C:\Users\muh\yolov10\runs\detect\train

                   from  n    params  module                                       arguments
  0                  -1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
  1                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]
  2                  -1  1      7360  ultralytics.nn.modules.block.C2f             [32, 32, 1, True]
  3                  -1  1     18560  ultralytics.nn.modules.conv.Conv             [32, 64, 3, 2]
  4                  -1  2     49664  ultralytics.nn.modules.block.C2f             [64, 64, 2, True]
  5                  -1  1      9856  ultralytics.nn.modules.block.SCDown          [64, 128, 3, 2]
  6                  -1  2    197632  ultralytics.nn.modules.block.C2f             [128, 128, 2, True]
  7                  -1  1     36096  ultralytics.nn.modules.block.SCDown          [128, 256, 3, 2]
  8                  -1  1    460288  ultralytics.nn.modules.block.C2f             [256, 256, 1, True]
  9                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]
 10                  -1  1    249728  ultralytics.nn.modules.block.PSA             [256, 256]
 11                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 12             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 13                  -1  1    148224  ultralytics.nn.modules.block.C2f             [384, 128, 1]
 14                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 15             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 16                  -1  1     37248  ultralytics.nn.modules.block.C2f             [192, 64, 1]
 17                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
 18            [-1, 13]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 19                  -1  1    123648  ultralytics.nn.modules.block.C2f             [192, 128, 1]
 20                  -1  1     18048  ultralytics.nn.modules.block.SCDown          [128, 128, 3, 2]
 21            [-1, 10]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 22                  -1  1    282624  ultralytics.nn.modules.block.C2fCIB          [384, 256, 1, True, True]
 23        [16, 19, 22]  1    929808  ultralytics.nn.modules.head.v10Detect        [80, [64, 128, 256]]
YOLOv10n summary: 385 layers, 2775520 parameters, 2775504 gradients, 8.7 GFLOPs

Freezing layer 'model.23.dfl.conv.weight'
AMP: running Automatic Mixed Precision (AMP) checks with YOLOv8n...
Downloading https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n.pt to 'yolov8n.pt'...
100%|████████████████████████████████████████| 6.23M/6.23M [00:01<00:00, 3.51MB/s]
Traceback (most recent call last):
  File "C:\Users\muh\anaconda3\envs\yolov10\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\muh\anaconda3\envs\yolov10\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\muh\anaconda3\envs\yolov10\Scripts\yolo.exe\__main__.py", line 7, in <module>
  File "C:\Users\muh\yolov10\ultralytics\cfg\__init__.py", line 594, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "C:\Users\muh\yolov10\ultralytics\engine\model.py", line 657, in train
    self.trainer.train()
  File "C:\Users\muh\yolov10\ultralytics\engine\trainer.py", line 213, in train
    self._do_train(world_size)
  File "C:\Users\muh\yolov10\ultralytics\engine\trainer.py", line 327, in _do_train
    self._setup_train(world_size)
  File "C:\Users\muh\yolov10\ultralytics\engine\trainer.py", line 271, in _setup_train
    self.amp = torch.tensor(check_amp(self.model), device=self.device)
  File "C:\Users\muh\yolov10\ultralytics\utils\checks.py", line 653, in check_amp
    assert amp_allclose(YOLO("yolov8n.pt"), im)
  File "C:\Users\muh\yolov10\ultralytics\utils\checks.py", line 640, in amp_allclose
    a = m(im, device=device, verbose=False)[0].boxes.data  # FP32 inference
  File "C:\Users\muh\yolov10\ultralytics\engine\model.py", line 166, in __call__
    return self.predict(source, stream, **kwargs)
  File "C:\Users\muh\yolov10\ultralytics\engine\model.py", line 441, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "C:\Users\muh\yolov10\ultralytics\engine\predictor.py", line 168, in __call__
    return list(self.stream_inference(source, model, *args, **kwargs))  # merge list of Result into one
  File "C:\Users\muh\anaconda3\envs\yolov10\lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context    response = gen.send(None)
  File "C:\Users\muh\yolov10\ultralytics\engine\predictor.py", line 255, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  File "C:\Users\muh\yolov10\ultralytics\models\yolo\detect\predict.py", line 25, in postprocess
    preds = ops.non_max_suppression(
  File "C:\Users\muh\yolov10\ultralytics\utils\ops.py", line 282, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "C:\Users\muh\anaconda3\envs\yolov10\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  File "C:\Users\muh\anaconda3\envs\yolov10\lib\site-packages\torch\_ops.py", line 502, in __call__
    return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\functorch\DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:30 [backend fallback]
AutogradCPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:34 [backend fallback]
AutogradCUDA: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:42 [backend fallback]
AutogradXLA: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:46 [backend fallback]
AutogradMPS: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:54 [backend fallback]
AutogradXPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:38 [backend fallback]
AutogradHPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:67 [backend fallback]
AutogradLazy: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:50 [backend fallback]
AutogradMeta: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:58 [backend fallback]
Tracer: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\autograd\TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\functorch\TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\functorch\DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\PythonFallbackKernel.cpp:148 [backend fallback]
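
For debugging, I suppose the failure could be reproduced outside the trainer with a minimal NMS call on CUDA tensors (a sketch only; the box/score values are arbitrary and I have not run this exact command):

rem hypothetical minimal repro, not part of the original training run
python -c "import torch, torchvision; b = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.]], device='cuda'); s = torch.tensor([0.9, 0.8], device='cuda'); print(torchvision.ops.nms(b, s, 0.5))"

If this raises the same NotImplementedError, the installed torchvision wheel simply has no CUDA kernel registered for torchvision::nms.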

Here's some information about the environment and setup I am using:

(yolov10) C:\Users\muh\yolov10>python -c "import torch; print(torch.cuda.device_count(), 'GPUs detected')"
2 GPUs detected

(yolov10) C:\Users\muh\yolov10>python -c "import torch; print(torch.version.cuda)"
11.8

(yolov10) C:\Users\muh\yolov10>python -c "import torch; print(torch.__version__)"
2.0.1+cu118
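
I have not yet checked the torchvision build the same way; presumably something like the following would show whether it carries a CUDA suffix (a version string without +cu118 would suggest a CPU-only build):

rem hypothetical check, the version-suffix interpretation is an assumption
python -c "import torchvision; print(torchvision.__version__)"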

Please let me know if you can help me solve this problem.

Thanks

MuhabHariri commented 1 month ago

Note: I followed the installation instructions provided in the repository documentation:

conda create -n yolov10 python=3.9
conda activate yolov10
pip install -r requirements.txt
pip install -e .
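
In hindsight, a quick check of the resulting torch build right after these commands would likely have caught the problem early (the log below shows a CPU-only torch-2.0.1+cpu install):

rem hypothetical sanity check, added for illustration
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"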

However, when I attempted to train the model with the following command: yolo detect train data=coco8.yaml model=yolov10n.yaml epochs=1 batch=32 imgsz=640 device=0,1, I encountered this issue:

(yolov10) C:\Users\muh\yolov10>yolo detect train data=coco.yaml model=yolov10n.yaml epochs=50 batch=32 imgsz=640 device=0,1
New https://pypi.org/project/ultralytics/8.2.28 available 😃 Update with 'pip install -U ultralytics'
Ultralytics YOLOv8.1.34 🚀 Python-3.10.9 torch-2.0.1+cpu
Traceback (most recent call last):
  File "C:\Users\muh\anaconda3\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\muh\anaconda3\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\muh\Anaconda3\Scripts\yolo.exe\__main__.py", line 7, in <module>
  File "C:\Users\muh\yolov10\ultralytics\cfg\__init__.py", line 594, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "C:\Users\muh\yolov10\ultralytics\engine\model.py", line 638, in train
    self.trainer = (trainer or self._smart_load("trainer"))(overrides=args, _callbacks=self.callbacks)
  File "C:\Users\muh\yolov10\ultralytics\engine\trainer.py", line 100, in __init__
    self.device = select_device(self.args.device, self.args.batch)
  File "C:\Users\muh\yolov10\ultralytics\utils\torch_utils.py", line 128, in select_device
    raise ValueError(
ValueError: Invalid CUDA 'device=0,1' requested. Use 'device=cpu' or pass valid CUDA device(s) if available, i.e. 'device=0' or 'device=0,1,2,3' for Multi-GPU.

torch.cuda.is_available(): False
torch.cuda.device_count(): 0
os.environ['CUDA_VISIBLE_DEVICES']: None
See https://pytorch.org/get-started/locally/ for up-to-date torch install instructions if no CUDA devices are seen by torch.

The training does not work when using "device=0,1"; it only proceeds when I omit it, in which case it falls back to the CPU instead of the GPUs. Therefore, I installed the CUDA 11.8 build of PyTorch 2.0.1 using the following command: "pip install torch==2.0.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html". After this installation, I encountered the error mentioned at the beginning of this thread.
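
If the torchvision wheel in this environment is indeed a CPU-only build, that would explain why torchvision::nms has no CUDA kernel even though torch itself now sees the GPUs. Reinstalling a matching CUDA build of torchvision alongside torch should presumably fix it; a sketch, assuming the usual torch 2.0.1 / torchvision 0.15.2 pairing and the official cu118 wheel index:

rem version pairing and index URL are assumptions, not from the original report
pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --index-url https://download.pytorch.org/whl/cu118

After that, torch.__version__ and torchvision.__version__ should both end in +cu118.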

sofaraway-9527 commented 4 weeks ago

[screenshot] I have a similar error: my GPU_mem is 0 and workers is 0. Did you solve it?

MuhabHariri commented 2 weeks ago

@sofaraway-9527 Did you solve it?

Zhangm0216 commented 1 week ago

@sofaraway-9527 Did you solve it?

Hello, I am facing the same problem. Did you solve it?

Zhangm0216 commented 1 week ago

@sofaraway-9527 Did you solve it?

I solved this problem just now by installing CUDA 11.7; maybe you can give it a try.
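
For anyone who prefers to stay with pip wheels rather than changing the system CUDA toolkit, the cu117 equivalents would presumably be (the version pairing is my assumption):

rem assumed matching versions for the CUDA 11.7 wheel index
pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 --index-url https://download.pytorch.org/whl/cu117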