sekulicd closed this 11 months ago
This seems to be derived from the local IP of your home network?
Yes, that should have been localhost instead of the address taken from ifconfig
this seems connected to https://github.com/premAI-io/prem-app/issues/474
Testing Docker locally, you can see that the baseUrl
received from the daemon is wrong: it is my public IP.
I think that is somewhat expected, so we need to decide whether we eventually want a special rule for localhost in the frontend or in the daemon. This only affects development, after all.
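A special rule for localhost could look roughly like the sketch below. This is only an illustration of the idea, not the actual prem-app code: the function name `normalizeBaseUrl` and the dev-mode flag are assumptions.

```typescript
// Hypothetical sketch: in development, rewrite a non-local host reported by
// the daemon (e.g. a public IP leaked from ifconfig) back to localhost.
const LOCAL_HOSTS = new Set(["localhost", "127.0.0.1", "0.0.0.0"]);

function normalizeBaseUrl(baseUrl: string, isDev: boolean): string {
  if (!isDev) return baseUrl; // production URLs pass through untouched
  try {
    const url = new URL(baseUrl);
    if (!LOCAL_HOSTS.has(url.hostname)) {
      // Keep protocol, port, and path; only the host is rewritten.
      url.hostname = "localhost";
    }
    return url.toString();
  } catch {
    return baseUrl; // leave malformed URLs untouched
  }
}
```

Whether this belongs in the frontend or the daemon is exactly the open question; doing it in the daemon would fix every client at once.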
I get a different result, @Janaka-Steph, using https://demo.prem.ninja
The Service Details page shows the correct baseUrl for the Llama service
But then the playground chat calls the wrong baseUrl, i.e.
Can you check the TanStack devtools?
I am on the released version on demo.prem.ninja. Is there a way to do that from there, or should I build the app with Node.js directly on the server?
I am making a new Docker release using the latest main, let's see if it fixes the issue
We can add the devtools to production builds, hidden and lazy-loaded, triggered by a command in the browser console
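That idea could be sketched as below. Everything here is an assumption for illustration: the global command name `__premDevtools` and the loader callback are invented, and in a real build the loader would be a dynamic `import()` of the devtools bundle so it is never fetched unless requested.

```typescript
// Hypothetical sketch: expose a console command that lazy-loads devtools
// in production. Nothing is imported up front; the bundle is only fetched
// when the user runs `__premDevtools()` in the browser console.
type DevtoolsLoader = () => Promise<void>;

function registerDevtoolsCommand(
  globalObj: Record<string, unknown>,
  loadDevtools: DevtoolsLoader
): void {
  globalObj.__premDevtools = () =>
    loadDevtools().catch((err) =>
      console.error("devtools failed to load", err)
    );
}
```

In the app itself the loader would be something like `() => import("@tanstack/react-query-devtools").then(mountDevtools)`, where `mountDevtools` is a hypothetical helper that renders the panel.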
The frontend just uses service.baseUrl here
But the baseUrl on the service detail page seems correct
Fixed using the latest main!
Surprise!
The current base URL used for chat/completion requests points to an incorrect host (62.4.58.131), causing request processing to fail.
http://62.4.58.131/gpt4all-lora-q4/v1/chat/completions
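The failing URL above is presumably composed from the daemon-reported baseUrl plus the service id and the OpenAI-style path, so a wrong host poisons every chat call. A minimal sketch of that composition, with the function name `chatCompletionsUrl` being an illustrative assumption rather than the actual prem-app helper:

```typescript
// Hypothetical sketch: build the chat endpoint from the service baseUrl.
function chatCompletionsUrl(baseUrl: string, serviceId: string): string {
  // Trim any trailing slash so the joined path never contains "//".
  const root = baseUrl.replace(/\/+$/, "");
  return `${root}/${serviceId}/v1/chat/completions`;
}
```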
@tiero @Janaka-Steph