Our inference script takes anywhere from 1 hour to 10+ hours depending on the task. The inference time depends on the size of the eval data, the input length, and the output length. Currently we don't use batch inference, but you can try vLLM to speed up inference.
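For reference, here is a minimal sketch of batched inference with vLLM; the model path, prompts, and sampling settings are placeholders, not this repo's actual configuration:

```python
from vllm import LLM, SamplingParams

# Hypothetical checkpoint path; substitute the model used by the eval script.
llm = LLM(model="path/to/your-model")

sampling_params = SamplingParams(
    temperature=0.0,   # greedy decoding, common for evals
    max_tokens=1024,   # cap on output length; adjust per task
)

# vLLM batches prompts internally via continuous batching, so passing
# the whole eval set at once is usually much faster than looping over
# single-prompt calls.
prompts = ["<prompt 1>", "<prompt 2>"]  # replace with the eval inputs
outputs = llm.generate(prompts, sampling_params)

for out in outputs:
    print(out.outputs[0].text)
```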
Could you share the observed inference times for the various table-related tasks in your evaluations, along with the hardware specifications used?