# FAME-ViL: Multi-Tasking Vision-Language Model for Heterogeneous Fashion Tasks
[![Conference](http://img.shields.io/badge/CVPR-2023(Highlight)-6790AC.svg)](https://cvpr.thecvf.com/)
[![Paper](http://img.shields.io/badge/Paper-arxiv.2303.02483-B31B1B.svg)](https://arxiv.org/abs/2303.02483)
## Updates
- :heart_eyes: (21/03/2023) Our FAME-ViL was selected as a highlight paper at CVPR 2023 (top 2.5% of 9155 submissions)!
- :blush: (12/03/2023) Code released!
Our trained model is available on Google Drive.
Please refer to the FashionViL repo for dataset preparation.
## Test on FashionIQ
```bash
# Evaluate the released multi-task checkpoint on FashionIQ (text-guided image retrieval).
# The bottleneck override matches the adapter width of the released fashionclip_512.pth checkpoint.
python mmf_cli/run.py \
    config=projects/fashionclip/configs/mtl_wa.yaml \
    model=fashionclip \
    datasets=fashioniq \
    checkpoint.resume_file=save/backup_ckpts/fashionclip_512.pth \
    run_type=test \
    model_config.fashionclip.adapter_config.bottleneck=512
```
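Since the same checkpoint is trained jointly on several fashion tasks, evaluating on another benchmark should only require swapping the `datasets` override. Below is a minimal sketch for FashionGen cross-modal retrieval; the `fashiongen` dataset key is an assumption, so check the configs under `projects/fashionclip/configs/` for the exact dataset names supported by this repo:

```bash
# Hypothetical sketch: test the same multi-task checkpoint on FashionGen retrieval.
# The dataset key below is an assumption; use the keys defined in this repo's configs.
python mmf_cli/run.py \
    config=projects/fashionclip/configs/mtl_wa.yaml \
    model=fashionclip \
    datasets=fashiongen \
    checkpoint.resume_file=save/backup_ckpts/fashionclip_512.pth \
    run_type=test \
    model_config.fashionclip.adapter_config.bottleneck=512
```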