CasparSwift / CLIM

Code for "Cross-Domain Sentiment Classification with Contrastive Learning and Mutual Information Maximization" (ICASSP 2021)

What is this error I am getting when running the training script? Help please #2

Open sahil14719 opened 2 years ago

sahil14719 commented 2 years ago
Running `sh train_script.sh` gives the following output:
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data]     C:\Users\hp\AppData\Roaming\nltk_data...
[nltk_data]   Package averaged_perceptron_tagger is already up-to-
[nltk_data]       date!
[nltk_data] Downloading package wordnet to
[nltk_data]     C:\Users\hp\AppData\Roaming\nltk_data...
[nltk_data]   Package wordnet is already up-to-date!
339
350
357
34742 6948 2000
model init
Traceback (most recent call last):
  File "train.py", line 166, in <module>
    model = model_factory[args.model_name](args)
KeyError: None
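The `KeyError: None` above means `args.model_name` was `None` when `train.py` looked it up in `model_factory` — i.e. no `--model_name` flag reached the script for that run. A minimal sketch of the failing lookup, with a guard that turns the cryptic `KeyError` into a readable message (the factory keys here are placeholders, not the repo's real registered names — check `train.py` for those):

```python
import argparse

# Placeholder factory mirroring train.py's model_factory; the real
# registered keys are an assumption -- check train.py for the actual names.
model_factory = {"clim": object, "bert": object}

def build_model(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_name", default=None)
    args = parser.parse_args(argv)
    if args.model_name not in model_factory:
        # "KeyError: None" happens when no --model_name flag is passed,
        # so argparse leaves the attribute at its default of None.
        raise SystemExit(f"--model_name must be one of {sorted(model_factory)}")
    return model_factory[args.model_name]

cls = build_model(["--model_name", "clim"])  # succeeds
# build_model([])  # exits with a clear error instead of KeyError: None
```

So the first thing to check is that `train_script.sh` actually passes `--model_name` (the line endings of a shell script edited on Windows can silently break argument passing).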
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data]     C:\Users\hp\AppData\Roaming\nltk_data...
[nltk_data]   Package averaged_perceptron_tagger is already up-to-
[nltk_data]       date!
[nltk_data] Downloading package wordnet to
[nltk_data]     C:\Users\hp\AppData\Roaming\nltk_data...
[nltk_data]   Package wordnet is already up-to-date!
339
350
357
16786 3357 2000
C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py:478: UserWarning: This DataLoader will create 32 worker processes in total. Our suggested max number of worker in current system is 4 (`cpuset` is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
model init
Initializing main bert model...
Traceback (most recent call last):
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 1331, in from_pretrained
    state_dict = torch.load(resolved_archive_file, map_location="cpu")
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\serialization.py", line 608, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\serialization.py", line 794, in _legacy_load
    deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 294859 more bytes. The file might be corrupted.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 1335, in from_pretrained
    if f.read().startswith("version"):
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 2324: character maps to <undefined>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 166, in <module>
    model = model_factory[args.model_name](args)
  File "C:\Users\hp\CLIM\model.py", line 15, in __init__
    self.bert_model = BertModel.from_pretrained(model_name, config=model_config)
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 1344, in from_pretrained
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'bert-base-uncased' at 'C:\Users\hp/.cache\huggingface\transformers\a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data]     C:\Users\hp\AppData\Roaming\nltk_data...
[nltk_data]   Package averaged_perceptron_tagger is already up-to-
[nltk_data]       date!
[nltk_data] Downloading package wordnet to
[nltk_data]     C:\Users\hp\AppData\Roaming\nltk_data...
[nltk_data]   Package wordnet is already up-to-date!
339
350
357
16786 3357 2000
C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py:478: UserWarning: This DataLoader will create 32 worker processes in total. Our suggested max number of worker in current system is 4 (`cpuset` is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
model init
Initializing main bert model...
Traceback (most recent call last):
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 1331, in from_pretrained
    state_dict = torch.load(resolved_archive_file, map_location="cpu")
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\serialization.py", line 608, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\serialization.py", line 794, in _legacy_load
    deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 294859 more bytes. The file might be corrupted.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 1335, in from_pretrained
    if f.read().startswith("version"):
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 2324: character maps to <undefined>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 166, in <module>
    model = model_factory[args.model_name](args)
  File "C:\Users\hp\CLIM\model.py", line 81, in __init__
    self.bert_model = BertModel.from_pretrained(model_name, config=model_config)
  File "C:\Users\hp\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\modeling_utils.py", line 1344, in from_pretrained
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'bert-base-uncased' at 'C:\Users\hp/.cache\huggingface\transformers\a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
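Separately from the crash, the `UserWarning` about 32 DataLoader workers in the log above is worth silencing: the script requests 32 worker processes on a 4-CPU machine. Capping `num_workers` at the CPU count (or using 0 to load data in the main process) removes it. Where the 32 is configured in this repo is an assumption — look for `num_workers` in `train.py`. A minimal sketch:

```python
import os
import torch
from torch.utils.data import DataLoader, TensorDataset

# Cap the worker count at the machine's CPU count, as the warning suggests.
safe_workers = min(os.cpu_count() or 1, 4)

# Tiny dummy dataset just to show the DataLoader call; num_workers=0 loads
# in the main process, which is also the safest setting on Windows.
ds = TensorDataset(torch.zeros(8, 2), torch.zeros(8))
loader = DataLoader(ds, batch_size=4, num_workers=0)  # or num_workers=safe_workers
batches = list(loader)  # 8 samples with batch_size 4 -> 2 batches
```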
CasparSwift commented 2 years ago

It seems that your pretrained weights for "bert-base-uncased" are corrupted. Try removing the cached weights in "C:\Users\hp/.cache\huggingface\transformers\" and running the script again.
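One way to act on this: delete the cache directory programmatically, or pass `force_download=True` to `from_pretrained` so `transformers` re-downloads the weights over the corrupted file. A sketch of the cache-clearing approach — note the path below matches the older `~/.cache/huggingface/transformers` layout seen in the traceback; newer `transformers` versions cache under `~/.cache/huggingface/hub` instead:

```python
import shutil
from pathlib import Path

# The traceback points at a partially downloaded checkpoint in the Hugging
# Face cache. Deleting the cache forces a fresh download on the next
# from_pretrained call. Alternatively:
#   BertModel.from_pretrained("bert-base-uncased", force_download=True)
cache = Path.home() / ".cache" / "huggingface" / "transformers"
if cache.exists():
    shutil.rmtree(cache)
```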