xorbitsai / inference

Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
https://inference.readthedocs.io
Apache License 2.0
4.7k stars · 368 forks

BUG: Embedding web UI abnormal display #632

Closed: ChengjieLi28 closed this issue 3 weeks ago

ChengjieLi28 commented 10 months ago

After launching the embedding model on a GPU:

[screenshot: web UI after launching the embedding model on GPU]
UranusSeven commented 10 months ago

You mean the address?

ChengjieLi28 commented 10 months ago

> You mean the address?

[screenshot]

Does the embedding model have the "Cached" mark?

ChengjieLi28 commented 10 months ago

I cached bge-base-en-v1.5 locally, but it is not shown here.

[screenshot: model list showing no "Cached" tag on bge-base-en-v1.5]
UranusSeven commented 10 months ago

No, at present, the embedding model cards do not display the 'Cached' tag. While the cache status is provided by the API, modifications are required on the frontend to display this tag.
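Since the API already reports cache status, the fix is confined to the frontend rendering logic. A minimal sketch of what that badge logic could look like, assuming the model card payload carries a boolean `cache_status` field (the field name here is an assumption for illustration, not the confirmed Xinference schema):

```python
def render_badges(model_card: dict) -> list[str]:
    """Collect UI badge labels for a model card.

    Assumes the API payload includes a boolean ``cache_status`` field,
    as described in the discussion; the exact field name is hypothetical.
    """
    badges = []
    if model_card.get("cache_status"):
        badges.append("Cached")
    return badges


# Hypothetical payloads mirroring the thread's scenario:
cached_card = {"model_name": "bge-base-en-v1.5", "cache_status": True}
uncached_card = {"model_name": "some-other-model", "cache_status": False}

print(render_badges(cached_card))    # ['Cached']
print(render_badges(uncached_card))  # []
```

The bug report then amounts to the embedding model cards skipping this badge step even though `cache_status` is present in the API response.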

github-actions[bot] commented 1 month ago

This issue is stale because it has been open for 7 days with no activity.

github-actions[bot] commented 3 weeks ago

This issue was closed because it has been inactive for 5 days since being marked as stale.