Closed · jiaolongxue closed this issue 2 days ago
I found a similar discussion that might be relevant to your issue: Xinference failed to access the local model, which is still open and was last updated on 2024-08-13 [1].
To resolve the 'Chat' object has no attribute 'gpt' exception when using Xinference to deploy ChatTTS in a self-hosted Docker environment, follow these steps:
Install Xinference:
pip install "xinference[all]"

Start Xinference Locally:
xinference-local

This starts Xinference at the default endpoint http://127.0.0.1:9997. If you need to allow access from non-local clients, start it with -H 0.0.0.0.
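As a quick sanity check, the endpoint above can be probed programmatically. A minimal sketch, assuming the default host and port; the `/v1/models` path is an assumption based on Xinference's OpenAI-compatible API, and the network call only runs when the script is executed directly against a live server:

```python
import urllib.request


def xinference_base_url(host="127.0.0.1", port=9997):
    """Build the Xinference endpoint URL (defaults match the docs above)."""
    return f"http://{host}:{port}"


def list_models(base_url):
    """Fetch the server's model list via the OpenAI-compatible
    /v1/models endpoint (assumed path; adjust if your version differs)."""
    with urllib.request.urlopen(f"{base_url}/v1/models", timeout=5) as resp:
        return resp.read().decode()


if __name__ == "__main__":
    # Only attempt the network call when run directly against a live server.
    print(list_models(xinference_base_url()))
```

If the server is up, the request returns a JSON list of deployed models; a connection error here usually means Xinference is not reachable at that address.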
Deploy the Model:
Open http://127.0.0.1:9997 in your browser and deploy the model you need (the docs use baichuan-chat as an example).

Obtain the Model UID:
Copy the model's UID from the Running Models page, e.g. 2c886330-8849-11ee-9518-43b0b8f40bea.

Configure Dify to Use the Deployed Model:
Go to Settings > Model Providers > Xinference in Dify. Enter the model name (e.g. baichuan-chat) and the server URL http://<Machine_IP>:9997 (replace <Machine_IP> with your machine's IP address). Enter the model UID copied from the Running Models page.

Ensure Correct Network Configuration:
If Dify runs in Docker, localhost inside the container refers to the container itself, not the host; you need to use the host machine's IP address instead.

By following these steps and ensuring proper network configuration, you should be able to resolve the 'Chat' object has no attribute 'gpt' exception [2][3][4].
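The Docker networking caveat can be made concrete with a small validation helper. This is a sketch, not part of Dify; the function name and error message are illustrative:

```python
from urllib.parse import urlparse


def validate_xinference_url(url, dify_runs_in_docker=True):
    """Reject loopback addresses when Dify runs in Docker, since
    'localhost' inside the Dify container resolves to the container
    itself rather than to the machine running Xinference."""
    host = urlparse(url).hostname
    if dify_runs_in_docker and host in ("localhost", "127.0.0.1"):
        raise ValueError(
            f"{url!r} points at the Dify container, not the host; "
            "use the host machine's IP, e.g. http://<Machine_IP>:9997"
        )
    return url
```

For example, validate_xinference_url("http://192.168.1.20:9997") passes the URL through unchanged, while any localhost URL raises a ValueError before Dify ever tries to connect.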
To continue talking to Dosu, mention @dosu.
Please open an issue in their repo.
Self Checks
Dify version
0.8.3
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Use Xinference to deploy ChatTTS; the TTS call raises an exception.
✔️ Expected Behavior
No exception raised.
❌ Actual Behavior
No response