bigscience-workshop/petals

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
https://petals.dev

Text-to-video generation models? #529

Open scenaristeur opened 8 months ago

scenaristeur commented 8 months ago

I've seen in https://github.com/bigscience-workshop/petals/issues/519 that you don't want to host Stable Diffusion models.

We are currently developing a chat game based on chat LLMs (https://scenaristeur.github.io/numerai/) that uses AI Horde for chat/text generation and image generation for now, but we could potentially be interested in video generation. Do you think a model like https://huggingface.co/damo-vilab/text-to-video-ms-1.7b could be hosted?

borzunov commented 8 months ago

Hi @scenaristeur,

How much GPU memory does this model take in total? At first sight, it seems to require less than 8-10 GB, so it fits on many consumer GPUs and there's not much sense in using Petals for it (Petals is mainly useful for models too large to fit on a single GPU). You can technically do it, but AI Horde may be a better fit (optionally, you can use the horde for different model components separately).
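
For reference, a quick way to check that estimate is to load the pipeline in fp16 with diffusers (the loading call follows the snippet on the model card) and sum the parameter sizes. This is a minimal sketch: it counts weights only and ignores activation and framework overhead.

```python
# Rough memory estimate for damo-vilab/text-to-video-ms-1.7b.
# Assumes: pip install torch diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# fp16 loading, as shown on the model card
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)

# Sum the sizes of the main weight-carrying components
total_bytes = 0
for name in ("unet", "vae", "text_encoder"):
    module = getattr(pipe, name, None)
    if module is not None:
        total_bytes += sum(p.numel() * p.element_size() for p in module.parameters())

print(f"fp16 weights: {total_bytes / 1e9:.1f} GB")
# ~1.7B params x 2 bytes/param ≈ 3.4 GB of weights, so even with
# activations the model fits comfortably within 8-10 GB of GPU memory.
```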

scenaristeur commented 8 months ago

In fact, I don't have a GPU myself, and my app is a web app that runs in the browser. My goal is to connect this web app to decentralized LLMs and to image or video generation, from the browser or from a mobile app. The point is to access decentralized models from a CPU-only browser or mobile client.
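
For what it's worth, a CPU-only client doesn't have to join the swarm directly: it can call a hosted HTTP gateway. Below is a minimal sketch assuming the endpoint exposed by the chat.petals.dev backend (https://github.com/petals-infra/chat.petals.dev); the path, parameters, and model name follow that project's README as I remember it and may have changed, so treat them as assumptions rather than a stable API.

```python
# Minimal sketch: text generation over a Petals HTTP gateway from a
# CPU-only client. The endpoint and parameters below are assumptions
# based on the chat.petals.dev README, not a guaranteed stable API.
import requests

resp = requests.post(
    "https://chat.petals.dev/api/v1/generate",
    data={
        "model": "petals-team/StableBeluga2",  # a model served by the public swarm
        "inputs": "A knight walks into the tavern and",
        "max_new_tokens": 64,
    },
    timeout=120,
)
result = resp.json()
print(result["outputs"] if result.get("ok") else result)
```

From a browser, the same POST can be issued with fetch(); the same repository also documents a WebSocket variant that is better suited to streaming tokens into a chat UI. Video generation, however, would still need GPU-backed servers behind such a gateway.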