Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Thank you for the great work. Could you release the Chinese MUGE/RefCOCO-CN fine-tuning scripts? With the pre-trained OFA-CN model and the English fine-tuning settings, I can't reproduce the results reported in Checkpoint_cn.md.