microsoft / CodeBERT


Inference task #288

Open weiliang-chen opened 1 year ago

weiliang-chen commented 1 year ago

When I run the inference task on tutorial_test.json, the program stops at the 800th batch when per_gpu_eval_batch_size is set to 1, but it works fine when I run singleline_test.json. Any help would be appreciated.
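Since per_gpu_eval_batch_size is 1, the 800th batch corresponds to roughly the 800th example in the file, so one way to narrow this down is to check whether that example is unusually long. The sketch below is a hypothetical diagnostic, not part of CodeBERT: it assumes the test file is JSON lines, and it joins all field values rather than guessing a field name. Adjust the path and threshold to match the actual file.

```python
import json

def find_long_examples(path, threshold=512):
    """Scan a JSON-lines file and return (index, word_count) for records
    whose combined field text exceeds the threshold. Very long inputs are
    a common cause of an inference run stalling at one specific batch."""
    long_ones = []
    with open(path) as f:
        for idx, line in enumerate(f):
            record = json.loads(line)
            # Join all values since the exact field name is unknown here.
            text = " ".join(str(v) for v in record.values())
            n_words = len(text.split())
            if n_words > threshold:
                long_ones.append((idx, n_words))
    return long_ones

if __name__ == "__main__":
    # Tiny synthetic file for demonstration; replace the path with
    # tutorial_test.json to inspect the real data around index 800.
    with open("sample_test.jsonl", "w") as f:
        f.write(json.dumps({"input": "short example"}) + "\n")
        f.write(json.dumps({"input": "tok " * 1000}) + "\n")
    print(find_long_examples("sample_test.jsonl"))  # flags the second record
```

If the example near index 800 turns out to be much longer than the rest, truncating it (or raising the tokenizer's max length) would confirm whether input size is the cause.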

[two screenshots attached showing the program output]