Closed murat-gunay closed 2 years ago
Same problem here on my Mac with Python 3.9.9 managed by Conda. Here is the error:
File "/Users/bob/opt/miniconda3/envs/nlp/lib/python3.9/site-packages/pyeurovoc/__init__.py", line 122, in __call__
logits = self.model(
File "/Users/bob/opt/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/bob/opt/miniconda3/envs/nlp/lib/python3.9/site-packages/pyeurovoc/model.py", line 30, in forward
cls_embedding = self.lang_model(x, attention_mask=mask)[0][:, 0, :]
File "/Users/bob/opt/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/bob/opt/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 996, in forward
encoder_outputs = self.encoder(
File "/Users/bob/opt/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/bob/opt/miniconda3/envs/nlp/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 562, in forward
if self.gradient_checkpointing and self.training:
File "/Users/bob/opt/miniconda3/envs/nlp/lib/python3.9/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'BertEncoder' object has no attribute 'gradient_checkpointing'
I'm not sure this applies, but @avramandrei might know. Taken from googling a similar error on the PyTorch forums:
From the error message, it looks like you used torch.save to save your whole model (and not just the weights), which is not recommended at all, because when the model definition changes (as it did between 4.10 and 4.11) you can't reload it directly with torch.load.
Our advice is to always use save_pretrained/from_pretrained to save/load your models, or, if that's not possible, to save the weights (model.state_dict()) with torch.save and then reload them with model.load_state_dict, as this will work across different versions of the models.
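The state_dict pattern quoted above can be sketched as follows. This is a minimal illustration with a toy nn.Linear standing in for the real BERT-based model; the file name is arbitrary:

```python
import torch
import torch.nn as nn

# Toy model standing in for the real classifier.
model = nn.Linear(4, 2)

# Save only the weights, not the whole pickled module object.
torch.save(model.state_dict(), "weights.pt")

# Later (possibly under a newer library version): rebuild the model
# from code, then load the saved weights into it.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("weights.pt"))

# The restored model now carries identical parameters.
assert torch.equal(model.weight, restored.weight)
```

Because only tensors are serialized, this survives refactorings of the module class itself, which is exactly what broke here when BertEncoder gained the gradient_checkpointing attribute.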
In my environment this is the PyTorch I have:
pytorch 1.8.0 cpu_py39h4f2e8f6_1 conda-forge
Hi,
Thank you for reporting the error! There have been some modifications to the Transformers library, and the model can no longer be loaded with the latest version of the library.
I will try to fix this in the next week. Until then, please install transformers==4.9.2. This is the version I have been working with, and it seems to be stable.
I have also updated the requirements.txt file to pin this version for the moment.
If anyone with a Conda-managed environment wants to do so, there is no maintained Conda package for that transformers version, but you can still get what you need by:
a) downloading the Conda package file for transformers 4.9.2 from https://anaconda.org/conda-forge/transformers/files?version=4.9.2
b) installing it in your environment with
conda install <downloaded_file>
Of course, it would be wiser to install all of the required libraries in a dedicated Conda environment.
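Putting the steps above together, the workflow could look like the sketch below. The environment name and the exact package file name are hypothetical; use the build you actually downloaded from the anaconda.org page above:

```shell
# Create and activate a dedicated environment (name is arbitrary).
conda create -n pyeurovoc python=3.9
conda activate pyeurovoc

# Install the transformers 4.9.2 package file downloaded from
# https://anaconda.org/conda-forge/transformers/files?version=4.9.2
# (replace the file name with the one you downloaded).
conda install ./transformers-4.9.2-pyhd8ed1ab_0.tar.bz2
```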
I don't see any error in the log, only a warning, which should not cause the code to stop. Could you print the rest of the log?
Oops, very sorry, my bad; of course you are right, it's only a warning. I deleted the misleading post.