Hi there, thank you for your kind words and the suggestion! As an author, I am interested and would be honored to publish a demo on Hugging Face! I will discuss it with my co-authors.
In the meantime, I have some questions about Hugging Face demos. Since CLIP-Actor is not a learning-based method (it is an optimization method that takes about 30 minutes to 1 hour), is it still possible to publish a user demo? I am not sure how such a demo would work on the web, and I hope there is a workaround or solution for this.
I will leave a comment ASAP after meeting with my co-authors. Thank you again for the kind words!
That's a great question! I'll check with the team and get back to you on that ASAP!
So an option could be to make a Space that showcases the results without actually running any predictions. This Space does that, for example: https://huggingface.co/spaces/weizmannscience/text2live
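For illustration, here is a minimal sketch of what such a showcase Space could look like with Gradio. The prompts, file paths, and title below are hypothetical placeholders, and the sketch assumes the pre-rendered MP4 results are uploaded to the Space repo alongside the app:

```python
import gradio as gr

# Hypothetical mapping from text prompts to pre-rendered result videos;
# replace with the actual prompts and MP4 files chosen for the demo.
SAMPLES = {
    "example prompt A": "results/example_a.mp4",
    "example prompt B": "results/example_b.mp4",
}

def show_result(prompt: str) -> str:
    # Look up the pre-rendered video for the selected prompt;
    # nothing is optimized or predicted at request time.
    return SAMPLES[prompt]

demo = gr.Interface(
    fn=show_result,
    inputs=gr.Dropdown(choices=list(SAMPLES.keys()), label="Text prompt"),
    outputs=gr.Video(label="Pre-rendered result"),
    title="CLIP-Actor (showcase of pre-rendered results)",
    description="Select a prompt to view a pre-rendered result; no optimization runs on the server.",
)

demo.launch()
```

Because the function only looks up existing files, a Space like this can run on free CPU hardware, sidestepping the 30-minute-to-1-hour optimization entirely.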
Great! Thanks for your help! We would like to add a demo similar to the example you shared here. I will finalize which samples to showcase after discussing with my co-authors.
In the meantime, could you let me know what we should prepare (e.g., video formats)?
Fantastic! Here are some things you can do:
And that should be it! Let me know if you run into any issues!
Hey there! Congrats on being accepted to ECCV 2022, this is an amazing project! Would there be any interest in publishing a demo on Hugging Face Spaces? I'm sure our users and the broader ML community would love to play around with this. The demo for ICON was really well received, and I imagine setting this up on Hugging Face would be pretty similar. I'm more than happy to guide you through it, and if you're interested I can also put in a request internally to assign a GPU to your demo.
Let me know if this is something that sounds interesting to you!