joeyz0z / MeaCap

(CVPR2024) MeaCap: Memory-Augmented Zero-shot Image Captioning

Unchanged things in these local paths? #4

Closed AnnyShen55 closed 2 months ago

AnnyShen55 commented 2 months ago

Hello! Thank you for your great work! I'm just wondering about viecap_inference.py: a few arguments default to your local paths, and I want to confirm that nothing differs from the versions available online. For example, I downloaded GPT-2 for the following line:

parser.add_argument('--language_model', default = 'F:\ImageText\MeaCap-family\MeaCap-vie\gpt2')

and downloaded the following from Hugging Face:

parser.add_argument('--vl_model', type=str, default=r'G:/HuggingFace/clip-vit-base-patch32')
parser.add_argument("--parser_checkpoint", type=str, default=r'G:/HuggingFace/flan-t5-base-VG-factual-sg')
parser.add_argument("--wte_model_path", type=str, default=r'G:/HuggingFace/all-Mini-L6-v2')

Will everything work fine if I replace these defaults with my own download paths? And do I need to modify any other lines? Thank you in advance for your help!
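For completeness, a minimal sanity check that manually downloaded checkpoints load from local folders might look like the sketch below; the directory names are placeholders, and the exact model classes used inside viecap_inference.py may differ.

# Hedged sketch, not from the repository: load each manually downloaded checkpoint
# from a local folder to confirm the files are complete before editing the defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPModel, CLIPProcessor
from sentence_transformers import SentenceTransformer

gpt2 = AutoModelForCausalLM.from_pretrained('./checkpoints/gpt2')  # placeholder local dir
gpt2_tok = AutoTokenizer.from_pretrained('./checkpoints/gpt2')
clip = CLIPModel.from_pretrained('./checkpoints/clip-vit-base-patch32')
clip_proc = CLIPProcessor.from_pretrained('./checkpoints/clip-vit-base-patch32')
wte = SentenceTransformer('./checkpoints/all-MiniLM-L6-v2')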

joeyz0z commented 2 months ago

I have changed the local paths in viecap_inference.py to the correct Hugging Face project names. You can either download the weights from Hugging Face for offline use, or simply run viecap_inference.py and let the weights download automatically.

parser.add_argument('--language_model', default = 'openai-community/gpt2')
parser.add_argument('--vl_model', type=str, default=r'openai/clip-vit-base-patch32')
parser.add_argument("--parser_checkpoint", type=str, default=r'lizhuang144/flan-t5-base-VG-factual-sg')
parser.add_argument("--wte_model_path", type=str, default=r'sentence-transformers/all-MiniLM-L6-v2')
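As a rough illustration of why the repo-id defaults make manual downloads unnecessary (the exact model classes used inside viecap_inference.py may differ), the standard from_pretrained calls fetch and cache each checkpoint from the Hugging Face Hub on first use:

# Hedged sketch, not the repository's code: passing Hugging Face repo ids instead of
# local paths lets each library download and cache the weights automatically.
from transformers import GPT2LMHeadModel, CLIPModel, T5ForConditionalGeneration
from sentence_transformers import SentenceTransformer

language_model = GPT2LMHeadModel.from_pretrained('openai-community/gpt2')
vl_model = CLIPModel.from_pretrained('openai/clip-vit-base-patch32')
parser_model = T5ForConditionalGeneration.from_pretrained('lizhuang144/flan-t5-base-VG-factual-sg')
wte_model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')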

AnnyShen55 commented 2 months ago

Wow, thank you for your prompt response, I appreciate it!