Closed yuheyuan closed 1 year ago
The flags `n_gpus` and `gpu_model` are for internal purposes only and have no functionality in this repository; I forgot to remove them. However, you can specify the GPU by setting `cfg['gpu_ids']`.
You can modify the `apis/train.py` file accordingly.
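For reference, a minimal sketch of the two common ways to pin training to GPU 1. This is an assumption-laden illustration, not the repository's actual loading code: the `cfg` dict below is a plain-dict stand-in for the loaded config.

```python
import os

# Option A: restrict which devices the process can see. This must run
# before PyTorch initializes CUDA; physical GPU 1 then appears as cuda:0.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

# Option B: point the config at GPU 1 directly.
# (cfg here is a stand-in dict for the loaded experiment config.)
cfg = {}
cfg['gpu_ids'] = [1]
```

Note that the two options should not be combined naively: with Option A active, only one device is visible, so the config index would be 0, not 1.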
When I run DAFormer, it works fine, but when I run HRDA I get a CUDA out-of-memory error. I want to switch from GPU 0 to GPU 1, but I don't know how. Usually I specify the GPU in code, but that doesn't work in this project. I found this code in your gtaHR2csHR_hrda.py: should this `gpu_model` be changed? My GPUs are two 3090s, so I want to know how to change the GPU in this code (the default is GPU 0), or how to change the configs so the code runs successfully.
Maybe GPU 1 is actually being used: I specify GPU 1, but inside PyTorch GPU 1 gets index 0, and then this problem occurs.
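The renumbering described above can be sketched without touching CUDA at all: PyTorch's logical device index is just the position of the physical GPU in the `CUDA_VISIBLE_DEVICES` list (the variable names below are illustrative only).

```python
import os

# If CUDA_VISIBLE_DEVICES='1' is set before the process initializes CUDA,
# PyTorch sees exactly one device, so code must refer to index 0, not 1.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

visible = os.environ['CUDA_VISIBLE_DEVICES'].split(',')
# Logical index inside PyTorch = position in the visible-device list.
logical_index_of_gpu1 = visible.index('1')
print(logical_index_of_gpu1)  # 0
```

So if GPU 1 is selected via the environment variable, any `gpu_ids` or `cuda:N` reference in the config should use 0.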
So, I want to know whether a 3090 can run this code, or how to change the configs so that it runs.