Christopher-Hayes / vscode-chatgpt-reborn

Refactor, improve, and debug your code in VSCode with GPT-3 and GPT-4.
https://marketplace.visualstudio.com/items?itemName=chris-hayes.chatgpt-reborn
ISC License

The first time you use the plug-in, after entering apiBaseUrl and the key and clicking verify, there is no response. #36

Closed · zzy-life closed this issue 1 month ago

zzy-life commented 1 year ago

Describe the Bug

The first time you use the plug-in, after entering apiBaseUrl, entering the key and clicking verify produces no response. (If there is a response, asking a question reports a 401 error, and you need to restart VSCode before it works normally.)

Where are you running VSCode? (Optional)

None

Which OpenAI model are you using? (Optional)

None

Additional context (Optional)

No response

Christopher-Hayes commented 1 year ago

Sorry, missed this issue. I'll investigate.

Cytranics commented 1 year ago

Chris, man, can you just remove the whole "paste your API key and run a status check on the models" flow? Everyone has the same models. Doing this will open the door to allowing Azure, but man, you've really locked things down for no need.

Christopher-Hayes commented 1 year ago

> Chris, man, can you just remove the whole "paste your API key and run a status check on the models" flow? Everyone has the same models. Doing this will open the door to allowing Azure, but man, you've really locked things down for no need.

I'll reconsider how the code works.

This was done to show what models could be used. For a while this was needed for gpt-4, and still is to some degree with gpt-4-32k, which is limited-access but exceedingly expensive if actually used.

Ideally, this is fixed without removing model availability functionality. I'm not in a rush to add support for new APIs if it degrades the experience for the bulk of users.
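For context, the verification step amounts to listing the models the key can access. A minimal sketch of that kind of check, assuming the openai v3 Node client; this is not the extension's actual code, and the function name and filtering are illustrative:

```ts
import { Configuration, OpenAIApi } from "openai";

// Sketch only: list the models an API key can access, which doubles as key verification.
async function getAvailableModels(
  apiKey: string,
  apiBaseUrl = "https://api.openai.com/v1"
): Promise<string[]> {
  const openai = new OpenAIApi(new Configuration({ apiKey, basePath: apiBaseUrl }));
  // GET /models fails with a 401 if the key (or base URL) is wrong.
  const response = await openai.listModels();
  return response.data.data
    .map((model) => model.id)
    .filter((id) => id.startsWith("gpt-")); // keep only the chat models the UI cares about
}
```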

Cytranics commented 1 year ago

Fair enough, how about a bypass button? Also, allow us to change the prompt, and I'll send ya $500 cash.


Christopher-Hayes commented 1 year ago

@Cytranics I can add a bypass button short-term. I think the root issue here is that updates to apiBaseUrl put the extension in a broken state until restarted, which should be fixable.

In reference to the prompt - that's already changeable. OpenAI calls this the "system context" which is what you'll find in the extension settings. I know that's not evident to most users, I might need to update that setting to include the word "prompt".

Btw, that's very generous, I don't do FOSS full-time, so I couldn't accept that in good conscience.

Christopher-Hayes commented 1 year ago

For example: [two images attached]

zzy-life commented 1 year ago

Please consider supporting the 3.5 16k model, and adding a button to switch the context mode.

Christopher-Hayes commented 1 year ago

> Please consider supporting the 3.5 16k model, and adding a button to switch the context mode.

gpt-3.5-turbo-16k is already supported in the latest version of the extension. Having an easier way to change the "system context" is a possibility.

zzy-life commented 1 year ago

> > Please consider supporting the 3.5 16k model, and adding a button to switch the context mode.
>
> gpt-3.5-turbo-16k is already supported in the latest version of the extension. Having an easier way to change the "system context" is a possibility.

Sorry, I didn't notice that 16k is already supported. But switching context mode is also necessary, because sometimes GPT does not need to carry memory.

Christopher-Hayes commented 1 year ago

> > > Please consider supporting the 3.5 16k model, and adding a button to switch the context mode.
> >
> > gpt-3.5-turbo-16k is already supported in the latest version of the extension. Having an easier way to change the "system context" is a possibility.
>
> Sorry, I didn't notice that 16k is already supported. But switching context mode is also necessary, because sometimes GPT does not need to carry memory.

Could you elaborate on "switching context mode is also necessary, because sometimes GPT does not need to carry memory"? I'm not sure I'm 100% understanding. What do you mean by "switch context mode"? If you're referring to changing the "system context", as I mentioned above, it's already supported in the extension. I can work on making it easier to find, but it is there in the extension settings under "System Context".

zzy-life commented 1 year ago

> > > > Please consider supporting the 3.5 16k model, and adding a button to switch the context mode.
> > >
> > > gpt-3.5-turbo-16k is already supported in the latest version of the extension. Having an easier way to change the "system context" is a possibility.
> >
> > Sorry, I didn't notice that 16k is already supported. But switching context mode is also necessary, because sometimes GPT does not need to carry memory.
>
> Could you elaborate on "switching context mode is also necessary, because sometimes GPT does not need to carry memory"? I'm not sure I'm 100% understanding. What do you mean by "switch context mode"? If you're referring to changing the "system context", as I mentioned above, it's already supported in the extension. I can work on making it easier to find, but it is there in the extension settings under "System Context".

For example, if context is enabled by default, the extension sends the entire conversation and consumes more tokens. Sometimes I don't need to send the conversation history, so I would click the button to turn it off and save tokens.
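To make the request concrete, here is a rough sketch of what such a toggle would change, assuming the openai v3 Node client. It is not the extension's actual code, and treating the first history entry as the system prompt is my own assumption:

```ts
import { ChatCompletionRequestMessage, Configuration, OpenAIApi } from "openai";

// Sketch only: with context on, the full conversation is sent (and billed as input tokens);
// with it off, only the system prompt and the newest user message are sent.
async function ask(
  openai: OpenAIApi,
  history: ChatCompletionRequestMessage[], // history[0] assumed to be the system prompt
  newMessage: string,
  sendContext: boolean
): Promise<string | undefined> {
  const messages: ChatCompletionRequestMessage[] = sendContext
    ? [...history, { role: "user", content: newMessage }]
    : [history[0], { role: "user", content: newMessage }];

  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-16k",
    messages,
  });
  return response.data.choices[0].message?.content;
}
```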

Christopher-Hayes commented 1 year ago

@zzy-life ah I see, yeah, right now there's just the "clear" button, or closing the chat. I'll look at a way to incorporate something into the UI.

zzy-life commented 1 year ago

> @zzy-life ah I see, yeah, right now there's just the "clear" button, or closing the chat. I'll look at a way to incorporate something into the UI.

Because I have to click the clear button every time I use it, it's a little troublesome. If you have time, you can think about it, but it's not an urgently needed feature.

Christopher-Hayes commented 1 year ago

To provide an update, I'll have a fix coming soon for proxy APIs. It will allow the API URL to be set up at the same time as the API key. It will also fix issues with updating the API URL and having to restart the extension. This will only be a fix for proxies of the OpenAI API.
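Roughly speaking (a sketch under the assumption that the extension uses the openai v3 Node client, not the actual fix), a proxy of the OpenAI API only differs by its base URL, so the client can be rebuilt whenever the settings change instead of requiring a restart:

```ts
import { Configuration, OpenAIApi } from "openai";

// Sketch only: rebuild the client whenever apiBaseUrl or the API key changes,
// so no VSCode restart is needed. The proxy URL below is a placeholder.
let openai: OpenAIApi;

function rebuildClient(apiKey: string, apiBaseUrl: string): void {
  openai = new OpenAIApi(
    new Configuration({
      apiKey,
      basePath: apiBaseUrl, // e.g. "https://my-proxy.example.com/v1"
    })
  );
}
```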

I looked more into Azure's OpenAI service. This won't be a fix for Azure; Azure does some things differently that make it not quite a drop-in replacement. Since we already have an issue open for Azure API support, I've included more info over there: #28

Cytranics commented 1 year ago

If I have time this week I'll do a push with your code for Azure. I just spent so much time reverse engineering Genie because they closed-sourced it. I added system prompt changing and full Azure support.

I took a look at your code and it was pretty wildly different. That's why I didn't really get involved. But I've got some free time, so perhaps I can update yours.


Christopher-Hayes commented 1 year ago

> If I have time this week I'll do a push with your code for Azure. I just spent so much time reverse engineering Genie because they closed-sourced it. I added system prompt changing and full Azure support. I took a look at your code and it was pretty wildly different. That's why I didn't really get involved. But I've got some free time, so perhaps I can update yours.

Sure, I'd appreciate that.

To help you - it seems like for Azure support we'll want to swap the "OpenAI" library in api-provider.ts to use Azure's version, which supports both Azure's modified API and OpenAI's regular API. Azure calls their deployments "engines" and sends that to their API, so introducing an "engine" type to this extension in some way might be needed. In a perfect world, Azure users would see an "engine" dropdown in place of a "model" dropdown.
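Whether that ends up being Azure's own client or just the standard openai v3 client reconfigured, the shape of the Azure setup looks roughly like the sketch below. This is not code from api-provider.ts; the resource and deployment names are placeholders.

```ts
import { Configuration, OpenAIApi } from "openai";

// Sketch only: Azure routes requests per deployment ("engine") rather than per model,
// uses an "api-key" header instead of a Bearer token, and requires an api-version parameter.
function createAzureClient(apiKey: string, resource: string, deployment: string): OpenAIApi {
  return new OpenAIApi(
    new Configuration({
      apiKey,
      basePath: `https://${resource}.openai.azure.com/openai/deployments/${deployment}`,
      baseOptions: {
        headers: { "api-key": apiKey },
        params: { "api-version": "2023-05-15" },
      },
    })
  );
}
```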

Some relevant files:

Christopher-Hayes commented 11 months ago

In the latest release, v3.19.0, the setup screen now gives you a way to set the apiBaseUrl. It also lets you change the apiBaseUrl at any time without needing to restart VSCode.

Before I close this issue as completed, @zzy-life can you confirm the bug you saw is now fixed?

In relation to the Azure API discussion here, that will continue in #28. With some modifications, plus the extension switching to Vercel's AI package, we'll soon be able to support a number of non-OpenAI models.