-
# Questions
I want to compare the performance of PyTorch on RTX 3090 and GTX 1080 Ti graphics cards. I find that the 3090 is no better than the 1080 Ti when training a network and only 1× faster than the 1080 Ti when testing …
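A fair GPU comparison needs warmup iterations and, on GPU, explicit synchronization before reading the clock, since CUDA kernel launches return before the GPU has finished. A minimal timing-harness sketch (the `benchmark` helper and the PyTorch usage shown in comments are illustrative assumptions, not code from any of these repos):

```python
import time

def benchmark(fn, warmup=3, repeats=10, sync=None):
    """Average wall-clock time of `fn` over `repeats` calls.

    `sync` is an optional callable (e.g. torch.cuda.synchronize) that must
    run before each clock read, because CUDA work is launched asynchronously
    and the Python call returns before the GPU finishes.
    """
    for _ in range(warmup):      # warmup: exclude one-time setup costs
        fn()
    if sync:
        sync()
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    if sync:
        sync()
    return (time.perf_counter() - start) / repeats

# Hypothetical PyTorch usage (assumes torch is installed and CUDA available):
#   x = torch.randn(4096, 4096, device="cuda")
#   t = benchmark(lambda: x @ x, sync=torch.cuda.synchronize)
```

Without the synchronization step, the 3090 and 1080 Ti can appear to run at the same speed because only the kernel-launch overhead is being measured.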
-
When I export labels on COCO 2014 and set the resolution to 480×640, I get a CUDA out of memory error.
My GPU is an RTX 3090 with 24 GB of memory.
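For context on why higher resolution blows up memory: activation size grows linearly with batch size, channel count, and the number of pixels. A rough back-of-the-envelope sketch (the helper and the example shape are illustrative assumptions; common mitigations are a smaller batch, lower resolution, or mixed precision):

```python
def tensor_bytes(*shape, dtype_bytes=4):
    """Size of a dense tensor in bytes (fp32 by default)."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_bytes

# A single fp32 feature map of 256 channels at 480x640, batch size 8
# (a hypothetical intermediate activation, not from any specific model):
gib = tensor_bytes(8, 256, 480, 640) / 2**30  # ~2.34 GiB for ONE layer
```

Since a deep network keeps many such activations alive for the backward pass, even 24 GB is exhausted quickly at full resolution.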
-
I have one RTX 3090 and two RTX 3060 16G cards, so the total memory is 24 + 16 × 2 = 56 GB.
In this case, is it possible to finetune models?
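One caveat worth stating: under standard data parallelism the memory of multiple cards does not pool. Each GPU holds a full replica of the weights, gradients, and optimizer states, so the binding constraint is the smallest card (16 GB here), not the 56 GB sum. A rough per-GPU estimate for full fp32 finetuning with Adam (the helper and the 1B-parameter example are illustrative assumptions):

```python
def finetune_gib(num_params, bytes_per_param=4, grads=1, optimizer_states=2):
    """Rough per-GPU memory in GiB for weights + gradients + Adam's two
    moment buffers, ignoring activations. Under data parallelism every
    replica pays this in full; it is NOT divided across GPUs."""
    total_bytes = num_params * bytes_per_param * (1 + grads + optimizer_states)
    return total_bytes / 2**30

# e.g. a hypothetical 1B-parameter model in fp32:
# finetune_gib(1e9) -> ~14.9 GiB per GPU, before any activations
```

Techniques that do shard state across cards (e.g. DeepSpeed ZeRO or FSDP) change this picture, but plain `DataParallel`/`DistributedDataParallel` does not.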
-
When I try to train on my dataset with train_densify_prune.py, I encounter an error when processing reaches 60%.
My machine runs Ubuntu 20.04 with an RTX 4090 GPU.
The command I execute is python train_densif…
-
The accelerate YAML file:
```
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: fp8
…
```
-
Specs
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:09:46_PDT_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.T…
```
-
Hello, when I run "python tools/train.py configs/swin/upernet_swin_tiny_patch4_window7_512x512_160k_ade20k.py --options model.pretrained=models/upernet_swin_tiny_patch4_window7_512x512.pth…
-
The paper says "The training process takes about 20 minutes each image on one NVIDIA GeForce RTX3090 with a batch size of 1."
I used two RTX 3090s. However, it has already taken three hours and it doesn't s…
-
The author indicated that the graphics card was a Tesla V100, which is a relatively high hardware requirement for a typical lab. I would like to ask whether the author has performed calculation…
-
Thank you very much for your excellent work. I only have one RTX 3090. How should I modify the configuration file to reproduce the model you trained on multiple cards?
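In general (this is a generic recipe, not specific to any repo's config format above), a multi-GPU effective batch size can be reproduced on one card with gradient accumulation, leaving the learning-rate schedule unchanged because the effective batch is preserved:

```python
def grad_accum_steps(target_effective_batch, per_gpu_batch, num_gpus=1):
    """Accumulation steps so that num_gpus * per_gpu_batch * steps
    equals the original multi-GPU effective batch size."""
    per_step = per_gpu_batch * num_gpus
    if target_effective_batch % per_step:
        raise ValueError("effective batch not divisible by per-step batch")
    return target_effective_batch // per_step

# Hypothetical example: original run used 4 GPUs x batch 8 = effective 32;
# one RTX 3090 that fits batch 8 then needs 4 accumulation steps:
# grad_accum_steps(32, 8) -> 4
```

In training code this usually means calling `loss.backward()` every iteration but stepping the optimizer only every N iterations (with the loss divided by N); many frameworks expose this directly as a `gradient_accumulation_steps`-style option.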