Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Hello everyone, I am currently trying to run image-captioning inference with OFA-Huge fine-tuned on COCO over roughly 48k images, but it is very slow because I am processing one image per batch (about 1 image/sec, which means roughly 13 hours to run inference on the entire dataset). Is there a way to do batched inference on my test set while still keeping beam-search generation?
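One common approach is to chunk the test set into batches and pass each batch to the generator in a single call, since beam search (e.g. fairseq's `SequenceGenerator`, which OFA uses) supports batched samples. The sketch below shows only the batching skeleton; `caption_batch` is a hypothetical stub standing in for the real preprocessing + `task.inference_step(...)` call, which depends on your setup.

```python
# Minimal sketch of batched inference, assuming the underlying beam-search
# generator accepts a batch of preprocessed images at once.
# `caption_batch` is a HYPOTHETICAL placeholder, not OFA's actual API.

def batched(items, batch_size):
    """Yield successive fixed-size batches from a list of items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def caption_batch(image_batch, beam=5):
    # Placeholder: in practice, stack the image tensors for the batch,
    # build the sample dict, and run one beam-search generation call
    # for the whole batch instead of one call per image.
    return [f"caption for {img}" for img in image_batch]

image_paths = [f"img_{i}.jpg" for i in range(10)]
captions = []
for batch in batched(image_paths, batch_size=4):
    captions.extend(caption_batch(batch, beam=5))

print(len(captions))  # one caption per image
```

With a batch size of 16-32 (bounded by GPU memory, since beam search multiplies the decoder batch by the beam width), this typically cuts wall-clock time by an order of magnitude compared to one image per call.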