Open · jiminbot20 opened this issue 3 years ago
https://github.com/habla-liaa/ser-with-w2v2/blob/c9be8a9bc8c6c6969838e3e77c2fc8af10e27136/configs/models/dienen_classifier.yaml#L2

Even though I changed the GPU number in this config from 0 to 2 or 3, GPU 0 is still the one being allocated. How can I change which GPU is used?
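For reference, a per-model gpu_id entry like the one linked above typically just pins the module it configures to that device; it does not affect other stages of the pipeline. A minimal PyTorch-style sketch of that pattern (the variable name, the placeholder classifier, and PyTorch itself are illustrative here, not this repository's actual code):

```python
import torch

gpu_id = 2  # value that would come from the YAML config (illustrative)

# Only the module explicitly moved with .to() ends up on the requested GPU.
device = torch.device(f"cuda:{gpu_id}" if torch.cuda.is_available() else "cpu")
classifier = torch.nn.Linear(768, 4).to(device)  # placeholder downstream model

# Any other stage that calls .cuda() or .to("cuda") without an explicit index
# still allocates on GPU 0, which matches the behaviour described above.
```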
Maybe this is happening because that gpu_id variable only affects the training of the downstream model. Does this happen while the wav2vec 2.0 features are being extracted? A possible workaround is to run the following in a terminal before launching the scripts:

export CUDA_VISIBLE_DEVICES=2

With that set, only GPU number 2 should be used.
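If it is more convenient to apply the workaround from Python rather than the shell, a minimal sketch of the same idea is below (PyTorch is used only for the verification step; the same trick works for any CUDA-based backend, as long as the variable is set before the framework initializes CUDA):

```python
import os

# Hide every GPU except physical GPU 2 from this process.
# This must run before any CUDA-using library is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

import torch  # illustrative backend for the check below

# Inside the process, the single visible GPU is re-indexed as device 0.
if torch.cuda.is_available():
    print(torch.cuda.device_count())      # expected: 1
    print(torch.cuda.get_device_name(0))  # reports physical GPU 2
```

Equivalently, the variable can be set for a single run, e.g. CUDA_VISIBLE_DEVICES=2 python train.py (replace train.py with whichever script you actually launch); the important part is that it is set before any CUDA context is created.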