Closed jokedud closed 1 month ago
We provide the model on the Hugging Face website, and you need to follow the installation steps for offline inference. We don't provide a Hugging Face demo yet.
From @cantonalex: https://huggingface.co/spaces/alexcanton/V-Express/ From me: https://huggingface.co/spaces/faraday/V-Express (loads the required models only at startup, with the code slightly refactored; I also fixed requirements.txt a little after noticing the ONNX GPU package was missing from the setup, and added the accelerate package as well.)
Both require a GPU runtime.
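For reference, the requirements.txt fix described above might look something like the fragment below. This is only a sketch: the exact package names and version pins in the actual Space are not shown here, and `onnxruntime-gpu` / `accelerate` are assumptions based on the comment's description.

```text
# Hypothetical requirements.txt additions (not copied from the actual Space):
onnxruntime-gpu   # GPU-enabled ONNX runtime; plain onnxruntime only runs on CPU
accelerate        # needed by the inference pipeline, per the comment above
```

Note that `onnxruntime` and `onnxruntime-gpu` are separate PyPI packages; installing only the CPU package is a common cause of the "ONNX GPU setup missing" problem mentioned above.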
@jokedud You should duplicate the Space and set a GPU to run it with. In my opinion it costs a lot and isn't suitable for ZeroGPU, which is why it's closed.
I tried setting it up on Hugging Face but it didn't work. Has anyone set it up there yet, or is it not out yet?