Expected Behavior
No response
Steps To Reproduce
a
Environment
- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :
Is there an existing issue for this?
Current Behavior
```
Traceback (most recent call last):
  File "/home/cong009/.pycharm_helpers/pydev/pydevd.py", line 1415, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/cong009/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/mnt/laiqinghan/project/1_FUTU/llmModels/accelerate_chatglm6b/singleGPU_semantic_finetuning.py", line 232, in <module>
    main()
  File "/mnt/laiqinghan/project/1_FUTU/llmModels/accelerate_chatglm6b/singleGPU_semantic_finetuning.py", line 217, in main
    out = model(input)
  File "/home/cong009/miniconda3/envs/LLM/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/cong009/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b/35ca52301fbedee885b0838da5d15b7b47faa37c/modeling_chatglm.py", line 1215, in forward
    loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
  File "/home/cong009/miniconda3/envs/LLM/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/cong009/miniconda3/envs/LLM/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1174, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "/home/cong009/miniconda3/envs/LLM/lib/python3.8/site-packages/torch/nn/functional.py", line 3026, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
ValueError: Expected input batch_size (224) to match target batch_size (66).
```
Does anyone know what is causing this?
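For context on the error itself: `F.cross_entropy` requires the flattened logits and labels to share the same first dimension, so this `ValueError` means the tokenized inputs had 224 positions after flattening while the labels had only 66 (typically caused by passing `input_ids` and `labels` of different lengths, or padding only one of them). A minimal sketch that reproduces the same error with the same sizes (the vocabulary size of 100 is arbitrary, chosen only for illustration):

```python
import torch
import torch.nn.functional as F

# Logits flattened to (batch * seq_len, vocab_size) must line up one-to-one
# with labels flattened to (batch * seq_len,).
logits = torch.randn(224, 100)            # 224 flattened input positions
labels = torch.randint(0, 100, (66,))     # only 66 flattened label positions

try:
    F.cross_entropy(logits, labels)
except ValueError as e:
    print(e)  # Expected input batch_size (224) to match target batch_size (66).
```

Making the label tensor the same length as the (shifted) logits, e.g. `labels = torch.randint(0, 100, (224,))`, removes the mismatch.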
Anything else?
No response