Closed — Mjvolk3 closed this issue 1 year ago
Hi, I have to admit that I only ran very limited experiments on CPU, but you are right: I get the same error now. You are also correct that the line you quoted should handle exactly this case, casting the model to full precision so it works on CPU as well. I only now realized that the `if` condition does not evaluate to what I expected, so it never enters the `model.full()` branch. As a quick fix, you can simply cast the model to full precision via `model.to(torch.float32)`. (Perhaps this behavior changed in more recent versions of huggingface/transformers, but `model.full()` no longer works.)
I've now also updated the colab accordingly. Should work out-of-the-box now
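For reference, a minimal sketch of the suggested fix. The `nn.Linear` module is just a hypothetical stand-in for the actual model; the same cast applies to any `nn.Module` whose weights were loaded in half precision:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model, loaded in half precision
# as it would be for GPU inference.
model = nn.Linear(4, 2).to(torch.float16)

# Half-precision ops are not fully supported on CPU, so cast the
# whole model back to full precision before running it on CPU:
model = model.to(torch.float32)

print(next(model.parameters()).dtype)  # torch.float32
```

After the cast, all parameters (and buffers) are float32, so CPU inference no longer hits the half-precision error.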
If I change the runtime in the colab notebook to CPU, I get the following error.
I thought this issue would be taken care of by the following line.
Could you help me with using the model on a CPU? Thanks.