Alibaba-NLP / CLNER

[ACL-IJCNLP 2021] Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning

pdb: loss reference before assignment #15

Open Nuveyla opened 2 years ago

Nuveyla commented 2 years ago

Hey,

When running python train.py --config config/wnut17_doc_cl_kl.yaml with the original code (the only changes are to paths), I run into an error that loss is referenced before assignment. See the following screenshot:

[screenshot]

The TypeError shown there causes this issue. I tried adding is_split_into_words=True to line 3171 of embeddings.py, but this gave a new error (screenshot below) with the same result: loss is never assigned. What could be the cause?

[screenshot]
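For context, a minimal sketch (my own example, not the CLNER code) of how a "referenced before assignment" error usually arises: the assignment to loss sits inside a loop or branch that never executes, for instance because the tokenizer raised a TypeError first or the batch came through empty, so the later use of loss hits an unbound name.

```python
def total_loss(batches):
    total = 0.0
    for batch in batches:
        loss = batch * 2.0  # `loss` is bound only inside the loop body
        total += loss
    # If the loop body never ran, `loss` was never assigned:
    return total, loss

print(total_loss([1.0, 2.0]))  # works: (6.0, 4.0)
try:
    total_loss([])  # loop body never runs
except UnboundLocalError as exc:
    print("empty input:", exc)
```

So the TypeError in the tokenizer is the real root cause here; the loss error is just the downstream symptom.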

wangxinyu0922 commented 2 years ago

Which version of transformers are you using? Our code requires transformers==3.0.0 to run. It seems your installed version is higher.

(Not recommended) If you want to run the code with a higher version of transformers, you need to modify the tokenizer setting in this line of flair/embeddings.py to:

self.tokenizer = AutoTokenizer.from_pretrained(model, use_fast = False, **kwargs)
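A quick way to confirm the installed version before running train.py (a sketch of mine using the standard-library importlib.metadata, Python 3.8+; the 3.0.0 pin comes from the maintainer's comment above):

```python
from importlib import metadata

REQUIRED = "3.0.0"  # the transformers version CLNER expects

def transformers_version():
    """Return the installed transformers version string, or None if absent."""
    try:
        return metadata.version("transformers")
    except metadata.PackageNotFoundError:
        return None

v = transformers_version()
if v is None:
    print("transformers is not installed")
elif v != REQUIRED:
    print(f"transformers {v} installed; CLNER expects {REQUIRED}")
else:
    print("version OK")
```

This avoids starting a long training run only to fail inside the tokenizer.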
manzoorali29 commented 2 years ago

Hi, my transformers version is 3.0.0 and I am still facing the above-mentioned issue. Is there any other solution available? Or Nuveyla, have you found a solution to the problem? Thanks

wangxinyu0922 commented 2 years ago

Hi, My transformer version is 3.0.0. I am still facing the above-mentioned issue. Is there any other solution available? Or Nuveyla have you found any solution to the problem? Thanks

Hi, have you tried modifying the tokenizer setting in this line of flair/embeddings.py to:

self.tokenizer = AutoTokenizer.from_pretrained(model, use_fast = False, **kwargs)
manzoorali29 commented 2 years ago

Hi, My transformer version is 3.0.0. I am still facing the above-mentioned issue. Is there any other solution available? Or Nuveyla have you found any solution to the problem? Thanks

Hi, have you tried modifying the tokenizer setting in this line of flair/embeddings.py to:

self.tokenizer = AutoTokenizer.from_pretrained(model, use_fast = False, **kwargs)

Yes, I did that. But still the same error.

wangxinyu0922 commented 2 years ago

Hi, My transformer version is 3.0.0. I am still facing the above-mentioned issue. Is there any other solution available? Or Nuveyla have you found any solution to the problem? Thanks

Hi, have you tried modifying the tokenizer setting in this line of flair/embeddings.py to:

self.tokenizer = AutoTokenizer.from_pretrained(model, use_fast = False, **kwargs)

Yes, I did that. But still the same error.

Can you post the screenshot of the error?

manzoorali29 commented 2 years ago

[screenshot]

wangxinyu0922 commented 2 years ago

[screenshot]

I'm not sure of the reason. I set up a fresh environment from requirements.txt and cannot reproduce the error. It seems something is wrong with the input batch. You could use pdb to find out what is happening in the code.

Moreover, I see Device: cpu in your log. Our code currently does not support running without a GPU; that may be the reason for the error.
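A small sanity check along these lines (a sketch of mine, guarded so it also runs where torch is not installed):

```python
from importlib import util

def cuda_ready():
    """True only if torch is installed and reports a usable CUDA device."""
    if util.find_spec("torch") is None:
        return False  # torch not installed at all
    import torch
    return torch.cuda.is_available()

if not cuda_ready():
    print("No usable GPU detected: the run would fall back to Device: cpu, "
          "which this codebase does not support")
```

Running this before train.py makes the CPU fallback visible up front instead of deep in a traceback.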

Chenfeng1271 commented 2 years ago

[screenshot]

Hi, I ran into this issue too. Three possible causes may contribute to it:

1. The transformers version needs to be 3.0.0.

2. torch must be the GPU build.

3. An incompatible GPU and CUDA combination. In this case torch.cuda.is_available() can still return True, yet you hit "CUDA error: no kernel image is available for execution on the device". This is caused by a torch version that is too new for the GPU. For example, on a Tesla K40 with torch 1.7.0 and CUDA 10.1 I hit this error, but downgrading torch to 1.3.0 solved it.
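The third case is worth probing explicitly, because torch.cuda.is_available() only checks that a driver and device exist; it is actually launching a kernel that surfaces the "no kernel image" incompatibility. A hedged probe sketch (function name is mine, not from the repo; guarded so it degrades gracefully without torch or a GPU):

```python
from importlib import util

def probe_cuda_kernels():
    """Run one tiny GPU op to surface 'no kernel image' incompatibilities.

    is_available() does not verify that the installed torch wheel ships
    compiled kernels for this GPU's architecture; a real op does.
    """
    if util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if not torch.cuda.is_available():
        return "no CUDA device visible"
    try:
        (torch.ones(1, device="cuda") * 2).item()  # forces a real kernel launch
        return "ok"
    except RuntimeError as exc:  # e.g. 'no kernel image is available ...'
        return f"kernel error: {exc}"

print(probe_cuda_kernels())
```

If this prints a kernel error while is_available() is True, you are in case 3 and need a torch build compiled for your GPU's architecture.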