Closed Abandon99 closed 1 year ago
@Abandon99 I can't repro the issue you're having, but I also wouldn't conclude that this is not a proxy problem just because you can access ChatGPT. /api/generate calls https://api.openai.com/v1/completions, whereas ChatGPT calls a different subdomain at https://chat.openai.com/backend-api/{conversation, moderations}. I would not expect the same behavior from different APIs.
- See the createCompletion method for the API call.
- Try "Complete" mode in the OpenAI playground to see what happens to calls on the same subdomain but a different path. The playground calls https://api.openai.com/v1/engines/text-davinci-003/completions. Still, I wouldn't expect them to be the same. But if the call also errors there, it could be an IP issue.
- Can you try POSTing to the https://api.openai.com/v1/completions endpoint directly, to minimize the layers in between?
Sorry this isn't an official support answer, but I hope this helps you narrow down the problem. Good luck!
OK, you are right, it's still a proxy problem for the terminal. When I run
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
before
npm run dev
the API runs successfully.
The key tips for China users are the proxy environment variables above and process.env.OPENAI_API_KEY.
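Worth checking first: those exports only apply to the shell session they were run in. A quick sanity check (a throwaway sketch, not part of the project; readProxyEnv is my name) is to log what the Node process actually sees:

```javascript
// Returns the proxy-related variables visible to this process. If these
// come back undefined inside the dev server, the exports were made in a
// different terminal than the one running `npm run dev`.
function readProxyEnv(env = process.env) {
  return {
    https_proxy: env.https_proxy,
    http_proxy: env.http_proxy,
    all_proxy: env.all_proxy,
  };
}

console.log(readProxyEnv());
```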
I tried the solution you provided. Before making any changes, the interface returned status 500 with the response text 'An error occurred during your request'. After following your solution, the status changed to 400 and the response text became 'The plain HTTP request was sent to HTTPS port'. Does this mean I need to use the HTTPS protocol?
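For what it's worth, the openai v3 SDK wraps axios, and axios applies https_proxy from the environment on its own; "The plain HTTP request was sent to HTTPS port" typically means the HTTPS request hit the proxy's HTTP port without a CONNECT tunnel. A common workaround (a sketch; axiosProxyConfig is my name, not the project's) is to disable axios's env-var proxying and supply a tunneling agent instead:

```javascript
// Build the axios request-config overrides for createCompletion's second
// argument: turn off axios's built-in env-var proxy handling and route
// through an agent that issues a proper CONNECT tunnel.
function axiosProxyConfig(httpsAgent) {
  return {
    proxy: false, // stop axios from applying https_proxy itself
    httpsAgent,   // e.g. tunnel.httpsOverHttp({ proxy: { host, port } })
  };
}

// Usage: openai.createCompletion({ ...request }, axiosProxyConfig(agent));
```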
1. npm install tunnel
2. In pages\api\generate.js:
const tunnel = require('tunnel');

try {
  const completion = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: generatePrompt(animal),
    temperature: 0.6,
  }, {
    // second argument is the axios request config
    httpsAgent: tunnel.httpsOverHttp({
      proxy: {
        host: '127.0.0.1',
        port: ??, // fill in your local proxy's HTTP port
      },
    }),
  });
  // ... rest of the handler, closed by the file's existing catch block
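To avoid hardcoding the host and port, they can be derived from the same https_proxy variable exported earlier (a sketch; proxyFromEnv is a hypothetical helper name):

```javascript
// Parse host/port for tunnel.httpsOverHttp out of the https_proxy (or
// http_proxy) environment variable, e.g. http://127.0.0.1:7890.
function proxyFromEnv(env = process.env) {
  const raw = env.https_proxy || env.HTTPS_PROXY || env.http_proxy || '';
  if (!raw) return null;
  const url = new URL(raw); // URL is a Node global
  return { host: url.hostname, port: Number(url.port) || 80 };
}
```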
Error with OpenAI API request: tunneling socket could not be established, cause=socket hang up
Well, I used the port from the VPN settings, and that solved it.
Yeah! You are right!!!
Describe the bug
After clicking the Generate names button, the HTTP response shows: {"error":{"message":"An error occurred during your request."}}, and the terminal shows: Error with OpenAI API request: connect ETIMEDOUT 199.59.148.206:443
I can use https://chat.openai.com/chat via a proxy in China, so I don't think it's a proxy problem.
To Reproduce
OS
No response
Node version
v18.12.1