Closed pntt3011 closed 2 years ago
Many reasons:
I uploaded the fine-tuned model (tgir on FashionIQ) here: https://drive.google.com/file/d/1R8DLIHt0VazrJnZLA6jK-FfzP3m8OwFb/view?usp=share_link, just FYI.
Now you can run evaluation according to https://github.com/BrandonHanx/mmf#evaluation
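For later readers, the two steps above might look roughly like the sketch below. The gdown file ID is taken from the Google Drive link above, but the config path, model, and dataset names are my assumptions based on the upstream mmf CLI conventions — check the repo's README for the exact command:

```shell
# Fetch the shared checkpoint (file ID taken from the Google Drive link above)
pip install gdown
gdown 1R8DLIHt0VazrJnZLA6jK-FfzP3m8OwFb -O fashioniq_tgir.ckpt

# Run evaluation only (run_type=test); the config/model/dataset values
# below are placeholders -- substitute the ones from the README
mmf_run config=projects/fashionvil/configs/e2e_composition.yaml \
    model=fashionvil \
    dataset=fashioniq \
    run_type=test \
    checkpoint.resume_file=fashioniq_tgir.ckpt
```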
Thank you for the reply @BrandonHanx, I will try that model and let you know the result later.
No worries, please let me know if you have any problems.
Hi @BrandonHanx, the fine-tuned model works just as in the paper. I am building a fashion search engine for my college graduation thesis, so your paper helps me a lot.
A little off-topic, but may I have your fine-tuned model for the OCIR task?
You are welcome.
Sorry, I only have the tgir and itr/tir fine-tuned models at hand, since it has been quite a long time. You can fine-tune by yourself according to the instructions in the README.
Thanks for considering my request, I'll close the issue now.
Thanks for your great work. Could you also provide the pre-trained model for the sub-category task? Thanks in advance.
❓ Questions and Help
In your paper, the average recall of the fixed encoder without fine-tuning is about 30% for the TGIR task. But when I run the code with your pre-trained model, the result is much lower (< 10%).
What I have done
```shell
git clone https://github.com/BrandonHanx/mmf.git
cd mmf
pip install --editable .
cd ..
pip install wandb einops
```
Results
I expected `test/fashioniq/r@k_fashioniq/avg` to be around 30%. (You can check this Colab notebook for more information.) Thank you very much.
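For anyone debugging a similar gap: the metric in question is a recall@k average over the retrieval ranking. As a sanity check independent of the framework, here is a minimal sketch of recall@k (all names are mine, not from the repo; FashionIQ's reported "avg" is the mean over two recall levels and the three clothing sub-categories):

```python
import numpy as np

def recall_at_k(scores: np.ndarray, targets: np.ndarray, k: int) -> float:
    """Fraction of queries whose target index appears in the top-k scores.

    scores:  (num_queries, num_gallery) similarity matrix
    targets: (num_queries,) index of the correct gallery item per query
    """
    topk = np.argsort(-scores, axis=1)[:, :k]        # highest scores first
    hits = (topk == targets[:, None]).any(axis=1)    # per-query hit flag
    return float(hits.mean())

# Toy example: 3 queries over a gallery of 5 items
scores = np.array([
    [0.1, 0.9, 0.2, 0.0, 0.3],   # target 1 ranks first  -> hit at k=1
    [0.5, 0.1, 0.4, 0.3, 0.2],   # target 2 ranks third  -> miss k=1, hit k=3
    [0.2, 0.3, 0.1, 0.6, 0.4],   # target 0 ranks fourth -> miss at k=3
])
targets = np.array([1, 2, 0])

r1 = recall_at_k(scores, targets, 1)   # 1/3
r3 = recall_at_k(scores, targets, 3)   # 2/3
avg = (r1 + r3) / 2                    # 0.5
```

If a correctly loaded checkpoint scored this way still lands far below the paper's numbers, the usual suspects are a mismatched config, a checkpoint that did not actually load, or a preprocessing difference.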