Closed zhuliyi0 closed 5 months ago
The current version does not support evaluation during fine-tuning.
Closing this issue due to inactivity. Please feel free to reopen it if you have any further questions.
> The current version does not support evaluation during fine-tuning.
Is this supported now for InternLM-XComposer-2.5? If not, when will it be supported?
Inside finetune.py, eval_dataset is set to None, which raises an error once evaluation is triggered.
I tried building an eval dataset the same way as the training set and assigning it to eval_dataset, but with no luck. I'm not very familiar with the transformers code.
Need help. Thanks in advance!
  8%|██████████▌ | 10/120 [03:04<33:35, 18.32s/it]
Traceback (most recent call last):
  File "/root/InternLM-XComposer/finetune/finetune.py", line 311, in <module>
    train()
  File "/root/InternLM-XComposer/finetune/finetune.py", line 301, in train
    trainer.train()
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 1553, in train
    return inner_training_loop(
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 1927, in _inner_training_loop
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2254, in _maybe_log_save_evaluate
    metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2964, in evaluate
    eval_dataloader = self.get_eval_dataloader(eval_dataset)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 879, in get_eval_dataloader
    raise ValueError("Trainer: evaluation requires an eval_dataset.")
ValueError: Trainer: evaluation requires an eval_dataset.
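For anyone hitting the same error: since the repo's `finetune.py` hard-codes `eval_dataset=None`, one common workaround is to hold out a fraction of your training samples yourself and pass the held-out list to the `Trainer`. This is only a sketch, not the repo's supported API — the `split_train_eval` helper below is hypothetical, and the commented `Trainer(...)` call just shows where the held-out dataset would plug in.

```python
import random


def split_train_eval(samples, eval_ratio=0.1, seed=42):
    """Hold out a fraction of `samples` for evaluation.

    `samples` is any indexable list of training examples; the real
    dataset objects built in finetune.py would be split the same way.
    Returns (train_samples, eval_samples), disjoint and covering all
    of `samples`.
    """
    rng = random.Random(seed)
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    # Always keep at least one example for evaluation.
    n_eval = max(1, int(len(samples) * eval_ratio))
    eval_idx = set(indices[:n_eval])
    train = [s for i, s in enumerate(samples) if i not in eval_idx]
    evals = [s for i, s in enumerate(samples) if i in eval_idx]
    return train, evals


# In finetune.py, the split result would then replace eval_dataset=None:
# train_ds, eval_ds = split_train_eval(full_dataset)
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
```

With `eval_dataset` actually populated, `Trainer.evaluate()` no longer raises the `ValueError` above, though you may still need an evaluation collator/metrics setup that matches the model's inputs.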