Closed QichangZheng closed 1 month ago
@QichangZheng Hello! I'm here to assist you with any bugs, questions, or contributions. Let's work together to resolve the issue you're experiencing.
I found some similar discussions and issues that might be relevant to your problem:
For your specific issue, it seems like there might be a configuration problem causing the OpenAI tokens to be consumed even when using Ollama local models. Here are a few steps you can take to troubleshoot and potentially resolve the issue:

1. **Check Model Configuration**
2. **API Endpoint Configuration:** When running Dify in Docker, use `host.docker.internal` to connect to the host machine. For example, if you are using LMStudio to provide local access to a model, the API address should be something like `http://host.docker.internal:1234/v1` [1].
3. **Token Consumption Monitoring**
4. **Disable OpenAI API Key**

If these steps do not resolve the issue, you might want to look into the specific logs and debugging information provided by Dify to get more insight into why the OpenAI tokens are being consumed.
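One quick check for step 2 is to confirm that the Dify container can actually reach the local model server through `host.docker.internal`. A minimal sketch (the container name `docker-api-1` and the Ollama default port 11434 are assumptions; check `docker ps` and your provider's port):

```shell
# Exec into the Dify API container and probe the local model server.
# Ollama exposes GET /api/tags, which lists locally available models.
docker exec -it docker-api-1 \
  curl -s http://host.docker.internal:11434/api/tags
# If Ollama is reachable from inside the container, this returns JSON
# describing the local models; a connection error here means Dify
# cannot see the local endpoint and may be falling back elsewhere.
```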
But when I disabled my OpenAI API key, my Dify application still worked fine. Here is my application structure:
Self Checks
Dify version
0.7.3
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
I have just switched from FastGPT to Dify. I developed an application and use the API to call it. I configured both OpenAI and Ollama models. When I used Ollama local models, my OpenAI tokens were consumed! I am sure that I published the app and updated the configuration. To test this, I disabled my OpenAI API key, and I could still use the application. Moreover, during development I only used gpt-4o-mini among the OpenAI models and never gpt-4o, yet the official OpenAI API dashboard showed that I consumed gpt-4o tokens instead of gpt-4o-mini.
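For reference, this is roughly how the app is called over the API. A published Dify app is addressed with its own app API key, not the OpenAI key; a minimal sketch of such a request (the base URL and the `app-xxx` key are placeholders for my self-hosted instance):

```shell
# Sketch of calling a published Dify app via the chat-messages endpoint.
# "app-xxx" is a placeholder for the app's own API key.
curl -s -X POST 'http://localhost/v1/chat-messages' \
  -H 'Authorization: Bearer app-xxx' \
  -H 'Content-Type: application/json' \
  -d '{
        "inputs": {},
        "query": "ping",
        "response_mode": "blocking",
        "user": "debug-user"
      }'
# The blocking response includes usage metadata, which helps show
# which provider/model actually served the request.
```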
✔️ Expected Behavior
No response
❌ Actual Behavior
No response