iansinnott / prompta

ChatGPT UI that is keyboard-centric, mobile friendly, and searchable.
https://chat.prompta.dev

Support accessing locally-running LLM #30

Closed: struanb closed this issue 8 months ago

struanb commented 8 months ago

I'm running a local LLM at http://127.0.0.1:1234/v1/, using LM Studio (I don't believe the details of the LLM I'm running matter).

After entering the API URL in Prompta's settings, I get the "Error in stream. Failed to fetch" snackbar message, and this error in the dev console:

Access to fetch at 'http://127.0.0.1:1234/v1/chat/completions' from origin 'https://chat.prompta.dev' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
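For anyone debugging the same thing: the message above is the browser rejecting the CORS preflight, so the failure can be reproduced outside the browser by replaying that preflight against the local server. A minimal sketch, assuming Node 18+ and the same LM Studio endpoint as above (the script itself is just an illustration, not part of Prompta):

```ts
import http from "node:http";

// Replay the browser's preflight against the local LM Studio server.
const req = http.request(
  {
    host: "127.0.0.1",
    port: 1234,
    path: "/v1/chat/completions",
    method: "OPTIONS",
    headers: {
      // What Chrome sends before the real cross-origin POST.
      Origin: "https://chat.prompta.dev",
      "Access-Control-Request-Method": "POST",
      "Access-Control-Request-Headers": "content-type, authorization",
    },
  },
  (res) => {
    // If the allow-origin header prints as `undefined`, the preflight
    // fails and the browser blocks the real request, which is exactly
    // the error shown above.
    console.log("status:", res.statusCode);
    console.log(
      "access-control-allow-origin:",
      res.headers["access-control-allow-origin"],
    );
    res.resume(); // drain the response body
  },
);
req.on("error", (err) => console.error("request failed:", err.message));
req.end();
```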
iansinnott commented 8 months ago

Unfortunately I'm not sure there's anything to be done about this; I don't know of a way to allow mixed content on an HTTPS website. This issue has come up before, and the workaround is either to use the desktop app or to run prompta yourself locally.

Since you're already running the LLM locally, maybe running prompta locally as well is a workable option; that way it isn't served over HTTPS. The desktop app should work with localhost out of the box.
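If running a full local instance is more than you want, one other pattern worth noting (an assumption, not something this thread or LM Studio confirms) is a small local shim that forwards requests to the LLM server and adds the Access-Control-Allow-Origin header the preflight is asking for. A minimal Node sketch, with hypothetical port choices:

```ts
import http from "node:http";

// Hypothetical CORS shim (ports and the shim itself are assumptions, not
// anything Prompta or LM Studio ships): listens on 1235, forwards to the
// LM Studio server on 1234, and adds the header the preflight requires.
const UPSTREAM_HOST = "127.0.0.1";
const UPSTREAM_PORT = 1234;

const server = http.createServer((req, res) => {
  // Answer the preflight directly instead of forwarding it.
  if (req.method === "OPTIONS") {
    res.writeHead(204, {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "content-type, authorization",
    });
    res.end();
    return;
  }

  // Forward everything else (e.g. POST /v1/chat/completions) upstream.
  const upstream = http.request(
    {
      host: UPSTREAM_HOST,
      port: UPSTREAM_PORT,
      path: req.url,
      method: req.method,
      headers: { ...req.headers, host: `${UPSTREAM_HOST}:${UPSTREAM_PORT}` },
    },
    (upRes) => {
      // Pass the response through, adding the CORS header.
      res.writeHead(upRes.statusCode ?? 502, {
        ...upRes.headers,
        "access-control-allow-origin": "*",
      });
      upRes.pipe(res);
    },
  );
  upstream.on("error", () => {
    res.writeHead(502).end("upstream unreachable");
  });
  req.pipe(upstream);
});

server.listen(1235, () => {
  console.log("CORS shim listening on http://127.0.0.1:1235");
});
```

Pointing Prompta's API URL at http://127.0.0.1:1235/v1/ would then route through the shim. This only addresses the missing CORS header; whether the HTTPS page may reach a loopback http:// address at all depends on the browser (at least in Chromium, 127.0.0.1 is treated as a potentially trustworthy origin, so it isn't blocked as mixed content, which matches the CORS-specific error above).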

If you're aware of any way to allow mixed content on the HTTPS site, though, please let me know. Closing, but feel free to reopen if there's a solution.