xorbitsai / inference

Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
https://inference.readthedocs.io
Apache License 2.0

FEAT: UI for text2image #792

Closed by aresnow1 7 months ago

aresnow1 commented 8 months ago

As we support the Stable Diffusion model for text2image, a simple UI is needed to let users generate images on web pages. Like the chat UI, we can launch a Gradio page for a quick implementation.
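
Roughly, a minimal Gradio page could be wired to the Xinference client like the sketch below; the `text_to_image` call on the image model handle, its OpenAI-style base64 response, the endpoint URL, and the model UID are assumptions here and would need to match whatever the server actually exposes.

```python
import base64
import io

import gradio as gr
from PIL import Image
from xinference.client import Client

# Placeholder endpoint and model UID; assumes an image model has already been launched.
client = Client("http://127.0.0.1:9997")
model = client.get_model("my-sd-model-uid")

def generate(prompt: str) -> Image.Image:
    # Assumed OpenAI-style images response carrying base64-encoded image data.
    response = model.text_to_image(prompt, response_format="b64_json")
    image_bytes = base64.b64decode(response["data"][0]["b64_json"])
    return Image.open(io.BytesIO(image_bytes))

with gr.Blocks(title="Text to Image") as demo:
    prompt_box = gr.Textbox(label="Prompt", lines=3)
    generate_btn = gr.Button("Generate")
    output_image = gr.Image(label="Generated image")
    generate_btn.click(generate, inputs=prompt_box, outputs=output_image)

demo.launch()
```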

aresnow1 commented 8 months ago

@Bojun-Feng Hi, any interest? Or any suggestions we could discuss? :)

Bojun-Feng commented 8 months ago

@aresnow1 Sounds like an interesting feature! However, I am worried about testing the UI locally, as my Mac laptop has no GPU and only 8 GB of RAM. If there are any small text2image or image2image models I can run on my laptop, I would be more than happy to implement the UI for them!

Bojun-Feng commented 8 months ago

As for suggestions, we could probably look at this open-source repo.

I envision the UI looking like this Hugging Face space, except with a bigger input text box.

aresnow1 commented 8 months ago

Hi, could you try this one (https://huggingface.co/stabilityai/sd-turbo) on your laptop?
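
SD-Turbo is distilled for single-step sampling, so it may be feasible (if slow) on a CPU-only laptop. A rough diffusers-based check could look like the sketch below, where the prompt and output path are just examples.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load SD-Turbo on CPU; float32 avoids half-precision issues without a GPU.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float32
)
pipe.to("cpu")

# SD-Turbo is trained for one inference step with classifier-free guidance disabled.
image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("output.png")
```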

codingl2k1 commented 8 months ago

I will create a PR to add SD-Turbo support.

codingl2k1 commented 8 months ago

https://github.com/xorbitsai/inference/pull/797