Open Nuveyla opened 2 years ago
What is the version of your transformers? Our code needs transformers==3.0.0 to run. It seems that your transformers installation has a higher version.
(Not recommended) If you want to run the code with a higher version of transformers, you need to change the tokenizer setting in this line of flair/embeddings.py to:
self.tokenizer = AutoTokenizer.from_pretrained(model, use_fast=False, **kwargs)
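Before patching anything, it can help to confirm which transformers version is actually installed and whether it matches the pin. A minimal sketch of that comparison (the helper name is mine, not part of the repo):

```python
# Hypothetical helper: compare an installed transformers version string
# against the pinned 3.0.0 that this repo expects.
def needs_pin(installed: str, required: str = "3.0.0") -> bool:
    """Return True if `installed` differs from the pinned version."""
    to_tuple = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return to_tuple(installed) != to_tuple(required)

# In practice you would pass transformers.__version__ here.
print(needs_pin("4.28.1"))  # higher version -> True
print(needs_pin("3.0.0"))   # pinned version -> False
```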
Hi, My transformer version is 3.0.0. I am still facing the above-mentioned issue. Is there any other solution available? Or Nuveyla have you found any solution to the problem? Thanks
Hi, have you tried to modify the tokenizer settings in this line of flair/embeddings.py into:
self.tokenizer = AutoTokenizer.from_pretrained(model, use_fast=False, **kwargs)
Yes, I did that. But still the same error.
Can you post a screenshot of the error?
I'm not sure of the reason. I installed a new environment based on requirements.txt and I cannot reproduce the error. It seems that there is something wrong with the input batch. Maybe you can use pdb to find out what is happening in the code.
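A lighter-weight alternative to stepping through with pdb is a quick sanity check on the batch before the forward pass. Everything below is an illustrative sketch (the batch layout, a list of token lists, is an assumption, not the repo's actual data structure):

```python
# Sketch: flag sentences in a batch that commonly break tokenization,
# e.g. empty inputs. Indices of problem items are returned.
def find_bad_inputs(batch):
    return [i for i, sentence in enumerate(batch) if len(sentence) == 0]

print(find_bad_inputs([["a", "b"], [], ["c"]]))  # -> [1]
```

If this flags nothing, dropping `import pdb; pdb.set_trace()` just before the failing line is the next step.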
Moreover, I see Device: cpu in your log. Currently, our code does not support running without a GPU. Maybe this is the reason for the error.
Hi, I met this issue too. Three possible causes:
1. The transformers version needs to be 3.0.0.
2. torch must be the GPU build.
3. An incompatible GPU and CUDA combination. In this case torch.cuda.is_available() can still return True, but you hit "CUDA error: no kernel image is available for execution on the device". It is caused by a torch build that is too new for the GPU. For example, with a Tesla K40 and torch 1.7.0 + CUDA 10.1 I got this error, but downgrading torch to 1.3.0 solved it.
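The third case can be probed at runtime: torch exposes torch.cuda.get_device_capability() and torch.cuda.get_arch_list(), and the "no kernel image" error appears when the installed wheel ships no kernels usable on the device. A rough offline sketch of that comparison (the function and the heuristic are mine, not torch's; real dispatch also involves PTX JIT, and the example arch list is illustrative):

```python
# Rough heuristic: a wheel can plausibly serve the device if it was
# built for some architecture no newer than the device's compute
# capability, expressed here as (major, minor) tuples.
def kernels_plausible(device_cc, built_archs):
    return any(arch <= device_cc for arch in built_archs)

# A Tesla K40 is compute capability (3, 5); suppose the installed wheel
# was built only for (3, 7) and newer -- then no kernel image matches:
print(kernels_plausible((3, 5), [(3, 7), (5, 2), (6, 0), (7, 0)]))  # False
print(kernels_plausible((7, 0), [(3, 7), (5, 2), (6, 0), (7, 0)]))  # True
```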
Hey,
When running python train.py --config config/wnut17_doc_cl_kl.yaml with the original code (only the paths changed), I run into an error that the loss is referenced before assignment. See the following screenshot:
The issue is caused by the TypeError shown there. I have tried adding is_split_into_words=True to line 3171 in embeddings.py. This gave a new error, again with the same result (no assignment of loss). What could be the cause of this?
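For context, the "referenced before assignment" symptom usually means the earlier TypeError prevented the loop body that binds the loss variable from ever completing. A minimal stand-in reproduction (not the repo's actual code):

```python
def training_step(batches):
    # If `batches` is empty, or every iteration fails before this
    # assignment, `loss` is never bound and the return line raises
    # UnboundLocalError ("'loss' referenced before assignment").
    for batch in batches:
        loss = sum(batch)  # stand-in for the real forward pass
    return loss

print(training_step([[1, 2]]))  # -> 3
try:
    training_step([])
except UnboundLocalError:
    print("loss referenced before assignment")
```

So fixing the TypeError in tokenization should make the loss error disappear as well.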