Open weiliang-chen opened 1 year ago
When I run inference on tutorial_test.json with per_gpu_eval_batch_size set to 1, the program stops at the 800th batch. It works fine when I run singleline_test.json. Any help would be appreciated.