Open 99bits opened 3 months ago
@99bits I was thinking about implementing this. I am planning to deploy a Huggingface space where you can just input your hosted server URL and use OmniParse through that.
Awesome, adding this feature would make OmniParse super flexible. Deployment on HF would also make it easy to quickly test the project.
Thanks for this amazing project.
I have an interesting use case and wondering if it's possible to do.
I would like to run the user-facing Gradio app on my local machine and send the GPU-bound AI workload to an online GPU machine. That way I can choose any kind of GPU backend depending on the workload.
Cheers
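A minimal sketch of that split, assuming the remote GPU machine runs an OmniParse server reachable over HTTP. The host, port, endpoint path (`/parse_document`), and JSON payload shape here are all assumptions, not OmniParse's actual API; only stdlib `urllib` is used so the local side stays lightweight:

```python
import json
import urllib.request

# Hypothetical remote OmniParse endpoint -- adjust host/port/path to your deployment.
OMNIPARSE_URL = "http://my-gpu-host:8000"

def build_parse_request(server_url: str, payload: dict) -> urllib.request.Request:
    """Build the HTTP request that ships the parsing job to the remote GPU server."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"{server_url.rstrip('/')}/parse_document",  # endpoint path is an assumption
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def parse_remotely(server_url: str, text: str) -> str:
    """Send input to the remote server and return the parsed result as text."""
    req = build_parse_request(server_url, {"text": text})
    with urllib.request.urlopen(req, timeout=120) as resp:
        return resp.read().decode("utf-8")
```

The local Gradio app would then just wrap `parse_remotely` (e.g. `gr.Interface(fn=parse_remotely, ...)` with a textbox for the server URL), so the UI runs on your machine while all GPU work happens on whichever backend the URL points at.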