princeton-nlp / SimCSE

[EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821
MIT License
3.31k stars 502 forks

ValueError: Mixed precision training with AMP or APEX (`--fp16`) can only be used on CUDA devices. #273

Closed LittleZ2022 closed 4 months ago

LittleZ2022 commented 4 months ago

Thank you so much for your project. I want to train the model on my own corpus. I followed the README but ran into a problem when running ./run_unsup_example.sh.

```
$ ./run_unsup_example.sh
02/15/2024 12:49:10 - INFO - __main__ - PyTorch: setting up devices
Traceback (most recent call last):
  File "D:\TASK\SimCSE-main\train.py", line 591, in <module>
    main()
  File "D:\TASK\SimCSE-main\train.py", line 263, in main
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
  File "D:\Anaconda3\lib\site-packages\transformers\hf_argparser.py", line 157, in parse_args_into_dataclasses
    obj = dtype(**inputs)
  File "<string>", line 57, in __init__
  File "D:\Anaconda3\lib\site-packages\transformers\training_args.py", line 428, in __post_init__
    raise ValueError("Mixed precision training with AMP or APEX (`--fp16`) can only be used on CUDA devices.")
ValueError: Mixed precision training with AMP or APEX (`--fp16`) can only be used on CUDA devices.
```

I've checked that my CUDA and PyTorch versions match my GPU, and I've tried other versions, but I still get the same error.

Environment: Windows; CUDA 11.0 as reported by torch.version.cuda (but CUDA 11.8 as reported by nvcc -V); PyTorch 1.7.1+cu110; everything else matches requirements.txt.
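For what it's worth, this error fires whenever `--fp16` is passed while PyTorch cannot see a CUDA device, so the mismatch between `nvcc -V` and `torch.version.cuda` is a strong hint that the installed wheel isn't using the GPU. A minimal sketch of a workaround (not the project's official script): only add `--fp16` to the training command when `torch.cuda.is_available()` is true. The `bert-base-uncased` argument here is just an illustrative placeholder mirroring run_unsup_example.sh.

```python
def build_train_cmd(cuda_available: bool) -> list:
    """Build the train.py command line, adding --fp16 only when CUDA is usable."""
    cmd = ["python", "train.py", "--model_name_or_path", "bert-base-uncased"]
    if cuda_available:
        # Mixed precision (AMP/APEX) requires a CUDA device, per the ValueError above.
        cmd.append("--fp16")
    return cmd

# Detect CUDA at run time; fall back to CPU-only if torch is missing or GPU-less.
try:
    import torch
    has_cuda = torch.cuda.is_available()
except ImportError:
    has_cuda = False

print(" ".join(build_train_cmd(has_cuda)))
```

If `torch.cuda.is_available()` prints False even though a GPU is present, the fix is usually reinstalling a PyTorch wheel built for the installed CUDA toolkit rather than editing the script.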

thanks!