Ubuntu 16.04, adapter-transformers==1.1.1

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01    Driver Version: 465.19.01    CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:81:00.0 Off |                  N/A |
| 41%   26C    P8    20W / 250W |      0MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
When I run adapter_finetuning.py I get the following error:
root@ubuntu:/home/project/xlm-t-main# python src/adapter_finetuning.py
Some weights of the model checkpoint at cardiffnlp/twitter-xlm-roberta-base were not used when initializing XLMRobertaModelWithHeads: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias']
This IS expected if you are initializing XLMRobertaModelWithHeads from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing XLMRobertaModelWithHeads from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of XLMRobertaModelWithHeads were not initialized from the model checkpoint at cardiffnlp/twitter-xlm-roberta-base and are newly initialized: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
0%| | 0/200 [00:00<?, ?it/s]Traceback (most recent call last):
File "src/adapter_finetuning.py", line 157, in
trainer.train()
File "/root/anaconda3/envs/train_cpu/lib/python3.7/site-packages/transformers/trainer.py", line 787, in train
tr_loss += self.training_step(model, inputs)
File "/root/anaconda3/envs/train_cpu/lib/python3.7/site-packages/transformers/trainer.py", line 1138, in training_step
loss = self.compute_loss(model, inputs)
File "/root/anaconda3/envs/train_cpu/lib/python3.7/site-packages/transformers/trainer.py", line 1162, in compute_loss
outputs = model(**inputs)
File "/root/anaconda3/envs/train_cpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/anaconda3/envs/train_cpu/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 805, in forward
return_dict=return_dict,
File "/root/anaconda3/envs/train_cpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/root/anaconda3/envs/train_cpu/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 685, in forward
raise ValueError("You have to specify either input_ids or inputs_embeds")
ValueError: You have to specify either input_ids or inputs_embeds
0%| | 0/200 [00:00<?, ?it/s]
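
From the traceback it looks like the batch that reaches the model's forward() contains neither input_ids nor inputs_embeds, so my guess is that the encoded columns are getting dropped somewhere before the model call. For reference, here is a minimal sketch (not my actual training script; it only uses the public checkpoint named above, and the test sentence is made up) showing the call that works versus the call that raises the same error:

# Minimal sketch: reproduce the ValueError outside the Trainer.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-xlm-roberta-base")
model = AutoModel.from_pretrained("cardiffnlp/twitter-xlm-roberta-base")

batch = tokenizer(["just a test sentence"], return_tensors="pt")
print(batch.keys())            # expect input_ids and attention_mask

with torch.no_grad():
    out = model(**batch)       # fine: input_ids is passed to forward()
    # out = model()            # calling with no inputs raises the same
                               # "You have to specify either input_ids or inputs_embeds"
print(out[0].shape)            # e.g. torch.Size([1, seq_len, 768])
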
Can anybody help?