This implementation is also official. Because it is difficult to set up the entire environment with the original RA-VQA codebase, we provide an implementation based on Huggingface-transformers. It supports inference with FLMR and PreFLMR, along with fine-tuning and evaluation scripts, so you can easily run inference with it.
If you want to fine-tune a BLIP model on the documents retrieved by PreFLMR, you can run inference, collect the retrieved documents, and write your own code to fine-tune BLIP on them.
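As a starting point for that custom fine-tuning code, here is a minimal sketch of the data-preparation step: packing each question together with its top-k retrieved documents into a single text input for a BLIP-style VQA model. The function name `build_vqa_prompt` and the `context:`/`question:` prompt template are illustrative assumptions, not part of this repository.

```python
def build_vqa_prompt(question: str, retrieved_docs: list[str], top_k: int = 2) -> str:
    """Concatenate the top-k retrieved documents with the question.

    `retrieved_docs` is assumed to be ordered by retrieval score,
    as collected from a PreFLMR inference run. The returned string
    can be fed as the text input when fine-tuning a BLIP model.
    """
    # Keep only the highest-ranked documents to fit the model's input length.
    context = " ".join(retrieved_docs[:top_k])
    return f"context: {context} question: {question}"
```

You would then pair each prompt with its image and ground-truth answer to build the fine-tuning dataset; truncation to the model's maximum sequence length is left to the tokenizer.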