Algolzw / daclip-uir

[ICLR 2024] Controlling Vision-Language Models for Universal Image Restoration. 5th place in the NTIRE 2024 Restore Any Image Model in the Wild Challenge.
https://algolzw.github.io/daclip-uir
MIT License

Hosting models on Hugging Face Hub #4

Open osanseviero opened 11 months ago

osanseviero commented 11 months ago

Hi there! Thanks for sharing the code of DA-CLIP!

Would you be interested in hosting the models on the Hugging Face Hub? (hf.co/models). The model weights are currently on Google Drive, which makes them hard for users to discover. By hosting the models on the Hub, you can document them with model cards, get download stats, and let users fetch the weights programmatically instead of downloading them by hand. That would also simplify training and evaluation. Here is a guide in case you're interested
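
Once the weights are on the Hub, pulling them down programmatically is a one-liner with `huggingface_hub` — a minimal sketch, where the repo id and filename below are just placeholders for wherever the checkpoints end up:

```python
# Minimal sketch: download a checkpoint from the Hub cache.
# "algolzw/daclip-uir" and "daclip_ViT-B-32.pt" are hypothetical placeholders.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="algolzw/daclip-uir",     # hypothetical repo id
    filename="daclip_ViT-B-32.pt",    # hypothetical checkpoint filename
)
print(ckpt_path)  # local path in the Hub cache, ready to load
```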

Algolzw commented 11 months ago

Hi! Thanks for your message. I will try to create a model card and document the models on the Hugging Face Hub. BTW, I was surprised to find that someone had already shared a Hugging Face demo here. Thanks to them for their great work!!!

osanseviero commented 11 months ago

Oh nice! Yes, that's a demo from the amazing @fffiloni!

Let me know if you have any questions about uploading the model.
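
For reference, the upload itself can also be scripted with `huggingface_hub` — a rough sketch only, with placeholder repo id and file paths rather than the project's actual layout:

```python
# Sketch of a possible upload flow; repo id and local paths are placeholders.
from huggingface_hub import HfApi, create_repo

repo_id = "algolzw/daclip-uir"  # placeholder repo id
create_repo(repo_id, exist_ok=True)

api = HfApi()
api.upload_file(
    path_or_fileobj="pretrained/daclip_ViT-B-32.pt",  # local checkpoint (placeholder path)
    path_in_repo="daclip_ViT-B-32.pt",
    repo_id=repo_id,
)
api.upload_file(
    path_or_fileobj="model_card.md",  # the model card, served as the repo README
    path_in_repo="README.md",
    repo_id=repo_id,
)
```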