Closed: tyrcho closed this pull request 2 months ago.
This looks very good, thank you very much!
> in my company we use a proxy to OPEN_AI
Interesting, I haven't heard of that. Mind if I ask what proxy service this is? You don't need to share the URL or any other private information, I'm just curious about such a service :O
I don't know the details (Datadog is a pretty large company), but basically we connect to an internal HTTP server with our email address as the passkey, and it transparently forwards the requests to the external OpenAI server using the company's account.
Update: it seems we internally maintain a private fork of https://github.com/PawanOsman/ChatGPT
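From that description, a call to such a transparent proxy might look something like this; the hostname and the email-as-token scheme are guesses based on the comment above, not Datadog's actual setup:

```sh
# Hypothetical: an internal proxy that accepts an email address as the key
# and forwards the request to OpenAI using the company account.
curl -s https://openai-proxy.internal.example.com/v1/chat/completions \
  -H "Authorization: Bearer jane.doe@example.com" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hello"}]}'
```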
- chatgpt-cli has a similar configuration option (a configurable URL)
- the prompt is written across several lines so it is easier to update later (see the first sketch after this list)
- the prompt asks the AI to add a comment to the suggested command (this could be made optional); the generated answers are still executable as-is!
- the prompt is evaluated on each request, so it always reflects the current folder
- user input spanning several lines is joined with semicolons (;) so it does not break the JSON (see the second sketch after this list)
- OPEN_AI_URL is customizable (at my company we use a proxy to OPEN_AI)
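For illustration, a minimal sketch of what such a multi-line, per-request prompt could look like in zsh; the wording and variable name are assumptions, not the exact code from this PR:

```sh
# Sketch only: if this assignment lives inside the widget function, it is
# re-evaluated on every request, so $(pwd) reflects the current folder.
prompt="You are a zsh autocompletion assistant.
Answer with a single executable shell command.
Append a short # comment explaining the command.
The current folder is $(pwd)."
```

Since the string is rebuilt each time the function runs, updating the instructions is just a matter of editing one of these lines.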
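And a sketch of the request side, showing the semicolon joining and the overridable URL; OPEN_AI_URL comes from this PR, while OPENAI_API_KEY, $BUFFER, and the rest of the plumbing are assumptions:

```sh
# Fall back to the official endpoint when no proxy is configured.
OPEN_AI_URL="${OPEN_AI_URL:-https://api.openai.com}"

# Join multi-line input with semicolons so the JSON body stays on one line;
# in zsh, $BUFFER holds the current command-line content.
input=$(printf '%s' "$BUFFER" | tr '\n' ';')

# Naive interpolation for illustration only; a real script must also escape
# quotes and other JSON special characters in $input.
curl -s "$OPEN_AI_URL/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"gpt-3.5-turbo\", \"messages\": [{\"role\": \"user\", \"content\": \"$input\"}]}"
```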
I tried to fix the system identification command

```sh
Your system is $(ls "/etc/*-release" 2> /dev/null | xargs cat | xargs | sed 's/ /,/g').
```

so it does not break on macOS, but I did not test it on a Linux system (update: I used the code from https://github.com/Myzel394/zsh-copilot/pull/2).
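For reference, a zsh-native way to avoid the no-match error without quoting the glob could use the (N) null-glob qualifier; this is a sketch under that assumption, not the code from https://github.com/Myzel394/zsh-copilot/pull/2:

```sh
# (N) makes the glob expand to nothing instead of aborting the command when
# no /etc/*-release file exists (e.g. on macOS).
files=(/etc/*-release(N))
if (( ${#files} )); then
  system=$(cat $files | xargs | sed 's/ /,/g')
else
  system=$(uname -sr)  # fallback where no *-release files are present
fi
```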