Closed: bexem closed this 1 month ago
I think I've made some progress with the code by myself; I hope it actually helps rather than confusing you even more. The custom OpenAI configuration can now be added successfully. However, I've encountered an unexpected issue during testing.
The problem appears to stem from KoboldCPP using a different API endpoint for image description than standard OpenAI implementations. While the configuration works smoothly with OpenWebUI (which I had assumed used the same OpenAI endpoints and completions format), KoboldCPP seems to diverge from this standard. At least that's what I think; I might be completely wrong, and I'm happy to be corrected.
When attempting to use the service/action, I receive an error indicating that the model doesn't exist. To investigate further, I added more debugging lines to the custom component, including code to fetch and log the available models. The component appears to parse the models correctly (it detects just one in my case), but curiously, no interaction was logged by KoboldCPP on its end.
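For context, here is roughly what that model-listing check does, as a minimal standalone sketch; the address and key are placeholders, and I'm calling aiohttp directly instead of going through the component's own session handling:

import asyncio
import aiohttp

async def list_models(base: str, api_key: str) -> list[str]:
    # Query the OpenAI-compatible /v1/models endpoint and return the model ids
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    async with aiohttp.ClientSession() as session:
        async with session.get(f"{base}/v1/models", headers=headers) as response:
            response.raise_for_status()
            data = await response.json()
            # OpenAI-style responses wrap the model list in a "data" field
            return [model["id"] for model in data.get("data", [])]

# Placeholder address and key; in my case this returns a single model
# asyncio.run(list_models("http://localhost:5001", "placeholder-key"))

The config-flow parsing I changed is below: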
import urllib.parse

# CONF_* constants, _LOGGER, ServiceValidationError and self._handshake are defined elsewhere in the component
async def custom_openai(self):
    self._validate_provider()
    try:
        url = self.user_input[CONF_CUSTOM_OPENAI_ENDPOINT]
        parsed = urllib.parse.urlparse(url)
        protocol = parsed.scheme
        base_url = parsed.hostname
        port = f":{parsed.port}" if parsed.port else ""
        # Use the path from the input URL if it exists, otherwise use "/v1"
        path = parsed.path if parsed.path and parsed.path != "/" else "/v1"
        # Ensure the path ends with "/models"
        if not path.endswith("/models"):
            path = path.rstrip("/") + "/models"
        endpoint = path
        header = {
            'Content-type': 'application/json',
            'Authorization': f'Bearer {self.user_input[CONF_CUSTOM_OPENAI_API_KEY]}'
        }
        _LOGGER.debug(
            f"Connecting to: [protocol: {protocol}, base_url: {base_url}, port: {port}, endpoint: {endpoint}]")
    except Exception as e:
        _LOGGER.error(f"Could not parse endpoint: {e}")
        raise ServiceValidationError("endpoint_parse_failed")

    if not await self._handshake(base_url=base_url, port=port, protocol=protocol, endpoint=endpoint, header=header):
        _LOGGER.error("Could not connect to Custom OpenAI server.")
        raise ServiceValidationError("handshake_failed")
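For what it's worth, my assumption is that _handshake simply recombines those pieces into one URL and issues a GET; something like this (my guess, not the actual implementation):

import aiohttp

async def handshake_sketch(protocol: str, base_url: str, port: str, endpoint: str, header: dict) -> bool:
    # Rebuild the URL from the parsed pieces, e.g. http://host:5001/v1/models
    url = f"{protocol}://{base_url}{port}{endpoint}"
    async with aiohttp.ClientSession() as session:
        async with session.get(url, headers=header) as response:
            # Any non-200 status is treated as a failed handshake
            return response.status == 200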
Thanks for the detailed error report and the code! I'll look into this.
Your parsing is much cleaner than the mess it was before, so I'm using it for now. Connecting to the server works now, but making the request might still fail (though I've only tried this with Open WebUI, which doesn't state that it actually implements OpenAI's endpoints).
I'll release a beta so that you can test it. Any feedback is welcome!
I'm happy it worked! My brain suddenly remembered when I woke up that I hadn't included the filename for the code 😅
I've tested the beta and I managed to add my KoboldCPP (thank you!), but it still errors when trying to use the service, saying the model doesn't exist.
I'm sure it has something to do with the OpenAI endpoint used by KoboldCPP itself. Unfortunately I'll be working the next two nights, so I'm not gonna be of much help.
Anyways, thank you so much!
Thanks for testing! I got the same error for Open WebUI. It's actually not that it can't find the model: the request just gets a 404 (Not Found), which usually indicates that the model doesn't exist. In this case, however, I'm pretty sure it's the endpoint that doesn't exist.
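If you get a chance, a quick probe against KoboldCPP like the one below should tell us which of the two it is; the address, model name and key are placeholders, so adjust them to your setup:

import asyncio
import aiohttp

async def probe(base: str, model: str, api_key: str) -> None:
    # Print raw status codes for the two OpenAI-style endpoints we rely on
    headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}
    async with aiohttp.ClientSession() as session:
        async with session.get(f"{base}/v1/models", headers=headers) as response:
            print("GET /v1/models ->", response.status)
        payload = {"model": model, "messages": [{"role": "user", "content": "ping"}]}
        async with session.post(f"{base}/v1/chat/completions", headers=headers, json=payload) as response:
            # 404 here while /v1/models works would mean the route itself is missing, not the model
            print("POST /v1/chat/completions ->", response.status)

asyncio.run(probe("http://localhost:5001", "your-model-name", "placeholder-key"))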
I will try this tomorrow with LocalAI just to be sure, as I know that its API implements the same endpoints as OpenAI.
Closing this for now. Feel free to reopen if you need more help!
Bug Description
When configuring a Custom OpenAI provider in Home Assistant's LLMVision integration, the provided endpoint URL is parsed incorrectly: an extra colon (":") is appended after the port number, causing connection failures. This occurs even when the input URL is correctly formatted.
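For illustration, here's the reassembly I'd expect after parsing, next to the kind of output I'm seeing; the address is just an example and I'm only guessing at where the stray colon gets introduced:

from urllib.parse import urlparse

url = "http://192.168.1.100:5001/v1"  # example endpoint, not my actual address
parsed = urlparse(url)

# Expected reassembly: a single colon, carried by the port fragment itself
port = f":{parsed.port}" if parsed.port else ""
print(f"{parsed.scheme}://{parsed.hostname}{port}{parsed.path}")
# -> http://192.168.1.100:5001/v1

# The symptom looks as if an extra ":" ends up next to the port, e.g.
print(f"{parsed.scheme}://{parsed.hostname}{port}:{parsed.path}")
# -> http://192.168.1.100:5001:/v1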