A large number of requests for a model that is not in the model list causes the service to become unresponsive.
To Reproduce
With the model configured in immersivetranslate not running, translate a page with a lot of content.
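The reproduce step can be approximated without immersivetranslate. Below is a minimal sketch that fires many concurrent chat-completion requests at the Xinference OpenAI-compatible endpoint for a model UID that is not loaded. The endpoint URL, port, worker count, and request count are assumptions; adjust them to your deployment.

```python
# Hypothetical reproduction sketch: many concurrent requests for a model UID
# that is NOT in the model list. Each request should fail fast with
# "Model not found in the model list"; the observed bug is that the REST
# API instead stops responding. Endpoint/port are assumptions.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://localhost:9997/v1/chat/completions"  # assumed Xinference address

def send_request(i: int) -> str:
    payload = json.dumps({
        "model": "qwen2-instruct",  # a model UID that is not running
        "messages": [{"role": "user", "content": f"translate segment {i}"}],
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read().decode()
    except Exception as exc:
        # Expected path: an HTTP error carrying the "Model not found" message.
        return f"request {i} failed: {exc}"

# Roughly mimics a page with many text segments being translated at once.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(send_request, range(100)))
```

Expected behavior would be all 100 requests returning errors promptly; with the bug, subsequent requests hang and the service stops answering.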
xinference docker version: 0.12.2
errors:
2024-06-28 10:34:06,169 xinference.api.restful_api 1 ERROR [address=0.0.0.0:59492, pid=95] Model not found in the model list, uid: qwen2-instruct
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/xinference/api/restful_api.py", line 1400, in create_chat_completion
model = await (await self._get_supervisor_ref()).get_model(model_uid)
File "/opt/conda/lib/python3.10/site-packages/xoscar/backends/context.py", line 227, in send
return self._process_result_message(result)
File "/opt/conda/lib/python3.10/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
raise message.as_instanceof_cause()
File "/opt/conda/lib/python3.10/site-packages/xoscar/backends/pool.py", line 659, in send
result = await self._run_coro(message.message_id, coro)
File "/opt/conda/lib/python3.10/site-packages/xoscar/backends/pool.py", line 370, in _run_coro
return await coro
File "/opt/conda/lib/python3.10/site-packages/xoscar/api.py", line 384, in __on_receive__
return await super().__on_receive__(message) # type: ignore
File "xoscar/core.pyx", line 558, in __on_receive__
raise ex
File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.__on_receive__
async with self._lock:
File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.__on_receive__
with debug_async_timeout('actor_lock_timeout',
File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
result = await result
File "/opt/conda/lib/python3.10/site-packages/xinference/core/utils.py", line 45, in wrapped
ret = await func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/xinference/core/supervisor.py", line 934, in get_model
raise ValueError(f"Model not found in the model list, uid: {model_uid}")
ValueError: [address=0.0.0.0:59492, pid=95] Model not found in the model list, uid: qwen2-instruct
Expected behavior
The server should return an error for each such request and remain responsive, not freeze.