utterworks / fast-bert

Super easy library for BERT based NLP models
Apache License 2.0

Can't run the learner example on CPU #183

Open Adrizzledefizzle opened 4 years ago

Adrizzledefizzle commented 4 years ago

Hey, I cannot run the example on my local machine on CPU. I have an Intel GPU, so I can't run it with CUDA. I tried to run it on CPU by changing 'cuda' to 'cpu' in `device = torch.device("cpu")`, but that results in the error:

```
RuntimeError: Found param bert.embeddings.word_embeddings.weight with type torch.FloatTensor, expected torch.cuda.FloatTensor.
When using amp.initialize, you need to provide a model with parameters
located on a CUDA device before passing it no matter what optimization level
you chose. Use model.to('cuda') to use the default device.
```

as soon as I try to fit it.

Any ideas on how to fix this? Are there more settings I have to change if I want to use my CPU for the learner?
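For reference, the traceback comes from apex's `amp.initialize`, which requires model parameters to already live on a CUDA device, so mixed precision has to be skipped entirely when falling back to CPU. A minimal sketch of that guard logic follows; the commented fast-bert call is hypothetical and assumes your installed version exposes the `is_fp16` keyword on `BertLearner.from_pretrained_model`:

```python
def pick_device_and_fp16(cuda_available):
    """Choose a torch device string and whether to enable fp16 (apex amp).

    apex's amp.initialize only works with parameters on a CUDA device,
    so mixed precision must be disabled when falling back to CPU.
    """
    if cuda_available:
        return "cuda", True
    return "cpu", False


device_str, use_fp16 = pick_device_and_fp16(cuda_available=False)
# device_str == "cpu", use_fp16 == False

# Hypothetical fast-bert usage (a sketch, not verified against your version):
# import torch
# from fast_bert.learner_cls import BertLearner
# device = torch.device(device_str)
# learner = BertLearner.from_pretrained_model(
#     databunch, 'bert-base-uncased', metrics=metrics,
#     device=device, logger=logger, output_dir=OUTPUT_DIR,
#     is_fp16=use_fp16,  # False on CPU, so amp.initialize is never called
# )
```

The key point is that changing the device string alone is not enough: the fp16/amp flag must also be turned off, otherwise the learner still routes the model through `amp.initialize` and hits the error above.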

Adrizzledefizzle commented 4 years ago

Is there a way to use my Intel GPU instead of NVIDIA?

ldmtwo commented 4 years ago

I also need this to work on CPU. Training on GPU is fine, but I need embeddings with low latency, and a GPU is not an option.

@Adrizzledefizzle FYI, an Intel GPU is rarely a viable option. You can look into Intel's releases for custom builds made for Intel hardware, or you may just need to install Intel MKL for the current version to work. I'm no longer up to date on the status, but there is often extra work needed: https://software.intel.com/en-us/articles/getting-started-with-intel-optimization-of-pytorch

saishashank85 commented 4 years ago

Yes, it's not worth training on CPU since it would be 10-20x slower. You can train the model on a Google Colab GPU or a Kaggle GPU for free. I suggest checking them out.