I'm evaluating the model on a relatively large dataset (single question, single answer). I was able to fine-tune the Bunny-1.1-Llama-3-8B-V model using one of the scripts provided. What is the best strategy to implement batch inference?
Sorry, we don't support batch inference currently. You may split the dataset into multiple parts and launch a model instance on each GPU, as we do when evaluating on VQA, GQA and SEED-Bench.
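For reference, here is a minimal sketch of that strategy, assuming your evaluation is driven by a single-GPU script. The script name `eval_model.py`, its flags, and the file paths are placeholders for whatever inference entry point you already use, not the repository's actual interface. It splits a JSONL question file into one chunk per GPU and launches an independent worker process per chunk via `CUDA_VISIBLE_DEVICES`:

```python
import os
import subprocess
from pathlib import Path

NUM_GPUS = 4
QUESTIONS = Path("questions.jsonl")  # one JSON object per line (placeholder path)
OUT_DIR = Path("answers")
OUT_DIR.mkdir(exist_ok=True)

# Split the question file into one interleaved chunk per GPU.
lines = QUESTIONS.read_text().splitlines()
chunks = [lines[i::NUM_GPUS] for i in range(NUM_GPUS)]

procs = []
for gpu, chunk in enumerate(chunks):
    chunk_file = OUT_DIR / f"chunk_{gpu}.jsonl"
    chunk_file.write_text("\n".join(chunk) + "\n")
    # One independent single-GPU worker per chunk; "eval_model.py"
    # and its flags are hypothetical stand-ins for your actual script.
    procs.append(subprocess.Popen(
        ["python", "eval_model.py",
         "--model-path", "Bunny-1.1-Llama-3-8B-V",
         "--question-file", str(chunk_file),
         "--answers-file", str(OUT_DIR / f"answers_{gpu}.jsonl")],
        env={**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu)},
    ))

# Wait for all workers, then merge the per-GPU answer files.
for p in procs:
    p.wait()
merged = "".join((OUT_DIR / f"answers_{g}.jsonl").read_text() for g in range(NUM_GPUS))
(OUT_DIR / "answers_merged.jsonl").write_text(merged)
```

The interleaved split (`lines[i::NUM_GPUS]`) keeps the chunks roughly the same size, so the GPUs finish at about the same time; the provided evaluation scripts for VQA, GQA and SEED-Bench follow the same chunk-and-merge pattern.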