mgrankin / ru_transformers

Apache License 2.0
776 stars · 108 forks

Runtime error #41

Closed Dmitriuso closed 3 years ago

Dmitriuso commented 3 years ago

Hey guys, I've got a problem with the Colab for fine-tuning. It returns a runtime error every time I try to launch run_lm_finetuning.py:

```
RuntimeError: Found param transformer.wte.weight with type torch.FloatTensor, expected torch.cuda.FloatTensor.
When using amp.initialize, you need to provide a model with parameters
located on a CUDA device before passing it no matter what optimization level
you chose. Use model.to('cuda') to use the default device.
```

Maybe I'm doing something wrong... Could you give me a hint? Thank you in advance.
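For context, the error message itself describes the fix: the model's parameters have to live on a CUDA device before apex's `amp.initialize` is called. A minimal sketch of that ordering follows; the function and parameter names are illustrative assumptions, not the exact code in `run_lm_finetuning.py`, and `amp` is passed in only so the ordering can be shown without a GPU (the real script uses `from apex import amp`):

```python
# Hedged sketch: move the model to CUDA *before* calling amp.initialize,
# exactly as the RuntimeError suggests. All names here are illustrative.
def move_then_init(model, optimizer, amp, device="cuda", opt_level="O1"):
    model.to(device)  # relocate parameters to the GPU first...
    # ...then let amp wrap/cast them at the chosen optimization level
    return amp.initialize(model, optimizer, opt_level=opt_level)
```

Calling `amp.initialize` before the `model.to(device)` step is what triggers the `Found param ... with type torch.FloatTensor` error above.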

mgrankin commented 3 years ago

Hi, you can try pinging the Colab author, @broAir.

Dmitriuso commented 3 years ago

@mgrankin Thank you, I'll try to 👍

mariapotashnyk commented 3 years ago

I've run into the same error. @Dmitriuso did you get any help?

Dmitriuso commented 3 years ago

> I've got the same error by now. @Dmitriuso did you get any help?

`%set_env CUDA_VISIBLE_DEVICES=0` should work.
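For reference, the Colab magic `%set_env CUDA_VISIBLE_DEVICES=0` just sets an environment variable for the notebook process, which exposes only GPU 0 to CUDA libraries. A plain-Python equivalent (it must run before torch or apex touch the GPU, since the CUDA driver reads the variable at initialization):

```python
import os

# Plain-Python equivalent of the Colab magic `%set_env CUDA_VISIBLE_DEVICES=0`.
# Set this before any CUDA-using library (torch, apex) initializes the GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose only GPU 0 to this process
```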