Closed: edisonzf2020 closed this issue 4 months ago
To address the issue with the `mxbai-rerank-large-v1` model not showing up in Dify after adding it to your self-hosted LocalAI, please ensure the following steps have been carried out correctly:

- Place the model file in the `models` directory and create a correct YAML configuration file for it within the same directory.
- Go to Settings > Model Providers > LocalAI and add the new model with its details to ensure it is integrated properly.

If the model still doesn't appear, check for these potential issues: review the integration process in Dify, especially the `updateModelList` function, to ensure the model list is refreshed correctly. Also, look out for any error messages during the process, as they can provide specific clues to the problem.
For a detailed guide on adding models to LocalAI and troubleshooting, refer to the LocalAI model configuration documentation.
If after these checks the issue persists, please share more details about the steps you've taken and any error messages encountered for further assistance.
After configuring LocalAI ReRank, running `SELECT * FROM provider_models` on the Dify DB returns the following result, indicating that it is configured correctly:

```
... | ... | localai | <<model_name>> | reranking | {"server_url": "http://xxx.xxx.xxx.xxx/v1"} | t | 2024-05-17 17:10:23 | 2024-05-23 07:26:34.66699
```
However, it does not show up in the model provider settings and cannot be used as a model.
It seems like the API endpoint `console/api/workspaces/current/models/model-types/rerank` is not returning the registered LocalAI reranker.
Request: `http://xxx/console/api/workspaces/current/models/model-types/rerank`

Response (only nvidia, cohere, and jina are returned; the localai provider is missing):

```json
{
  "data": [
    {"provider": "nvidia", "label": {"zh_Hans": "API Catalog", "en_US": "API Catalog"}, …},
    {"provider": "cohere", "label": {"zh_Hans": "Cohere", "en_US": "Cohere"}, …},
    {"provider": "jina", "label": {"zh_Hans": "Jina", "en_US": "Jina"}, …}
  ]
}
```
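One quick way to narrow this down is to call the same console endpoint directly and check whether a `localai` entry appears in `data`. Below is a rough sketch, not the exact Dify client: the host and token are placeholders taken from your own authenticated browser session, and it assumes the console API accepts that token as a Bearer header.

```python
import requests

# Hypothetical values: replace with your Dify host and a console access token
# copied from an authenticated browser session (assumption: the console API
# accepts it as a Bearer token).
DIFY_HOST = "http://xxx"
CONSOLE_TOKEN = "<your-console-access-token>"

resp = requests.get(
    f"{DIFY_HOST}/console/api/workspaces/current/models/model-types/rerank",
    headers={"Authorization": f"Bearer {CONSOLE_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# List the providers returned for the rerank model type.
providers = [item.get("provider") for item in resp.json().get("data", [])]
print(providers)  # expected to include "localai" once the integration works
```

If `localai` is missing from this list even though the row exists in `provider_models`, the problem is on the Dify API side rather than in your LocalAI configuration.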
Environment:
@edisonzf2020: Could you please point out how you managed to install the reranking model `mxbai-rerank-large-v1` in LocalAI?
@gakugaku: Which reranking model did you install in LocalAI, and how did you set it up? Thank you in advance.
@Sarmingsteiner This issue has been resolved in #4652, and it should work correctly in the latest version.
Here is how I managed to install a reranking model in LocalAI, using Docker and a Japanese reranker model as an example.

`compose.yaml`:
```yaml
services:
  local-ai:
    image: localai/localai:v2.16.0
    environment:
      MODELS_PATH: /models
    volumes:
      - ./data/LocalAI/models:/models:cached
```
`./data/LocalAI/models/reranker.yaml` (you can find more details in the 📈 Reranker | LocalAI documentation):
```yaml
name: japanese-reranker
backend: rerankers
parameters:
  model: hotchpotch/japanese-reranker-cross-encoder-small-v1
```
When you start LocalAI with this configuration, the model will be downloaded automatically.
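Before wiring the model into Dify, you can confirm that LocalAI is actually serving it by sending a test request. This is a minimal sketch, assuming LocalAI is reachable at `http://localhost:8080` and exposes the Jina-compatible `/v1/rerank` route described in its reranker documentation; the query and documents are just sample data.

```python
import requests

# Hypothetical check against LocalAI's rerank endpoint; adjust host/port to your setup.
LOCALAI_URL = "http://localhost:8080/v1/rerank"

payload = {
    "model": "japanese-reranker",  # must match `name` in reranker.yaml
    "query": "What is the capital of Japan?",
    "documents": [
        "Tokyo is the capital of Japan.",
        "Mount Fuji is the highest mountain in Japan.",
        "Kyoto was the former capital of Japan.",
    ],
    "top_n": 2,
}

resp = requests.post(LOCALAI_URL, json=payload, timeout=60)
resp.raise_for_status()

# Each returned result should carry an index and a relevance score
# for the corresponding document.
for result in resp.json().get("results", []):
    print(result)
```

If this request returns scored results, the LocalAI side is working and any remaining problem lies in the Dify integration.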
Select the LocalAI provider in Dify and configure it as follows:

- Model name: `japanese-reranker`
- Server URL: `http://<your-server>/v1` (make sure to include `/v1`)
Self Checks
Dify version: 0.6.8
Cloud or Self Hosted: Self Hosted (Docker)
Steps to reproduce
✔️ Expected Behavior
No response
❌ Actual Behavior
No response