openai / openai-quickstart-node

Node.js example app from the OpenAI API quickstart tutorial
https://platform.openai.com/docs/quickstart?context=node
MIT License

npm run dev runs successfully but responds 500: {"message": "An error occurred during your request."} #78

Closed: Abandon99 closed this issue 1 year ago

Abandon99 commented 1 year ago

Describe the bug

After clicking the Generate names button, the HTTP response shows: {"error":{"message":"An error occurred during your request."}}, and the terminal shows: Error with OpenAI API request: connect ETIMEDOUT 199.59.148.206:443

I can access https://chat.openai.com/chat through a proxy in China, so I don't think this is a proxy problem.

To Reproduce

  1. Run npm run dev successfully.
  2. Enter an animal.
  3. Click the Generate names button.
  4. The error message is shown: {"error":{"message":"An error occurred during your request."}}

OS

No response

Node version

v18.12.1

steventsao commented 1 year ago

@Abandon99 I can't repro the issue you're having, but I also wouldn't conclude that this is not a proxy problem just because you can access chatGPT.

  1. /api/generate calls https://api.openai.com/v1/completions, whereas chatGPT calls a different subdomain at https://chat.openai.com/backend-api/{conversation, moderations}. I would not expect the same behavior from different APIs.
  2. Try "Complete" mode in OpenAI playground to see what happens to calls in the same subdomain, but different path. Playground calls https://api.openai.com/v1/engines/text-davinci-003/completions. Still, I wouldn't expect them to be the same. But if the call also errors there, it can be an IP issue.
  3. Can you try POSTing to the https://api.openai.com/v1/completions endpoint directly to minimize the layers in-between? (A minimal curl example is sketched after this comment.)

Sorry this isn't an official support answer, but I hope this helps you narrow down the problem. Good luck!
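
To try the direct POST from item 3, a minimal curl call along these lines should work (the prompt and max_tokens values are only illustrative, and it assumes your key is exported as OPENAI_API_KEY):

    curl https://api.openai.com/v1/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -d '{"model": "text-davinci-003", "prompt": "Say hello", "max_tokens": 5}'

If this also times out from the same terminal, the problem is most likely in the terminal's network path rather than in the quickstart code.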

Abandon99 commented 1 year ago

OK, you are right: it is still a proxy problem for the terminal. When I run

export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890

before npm run dev, the API request succeeds.
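
In other words, the workaround is to set the proxy variables in the same shell session before starting the dev server (7890 is just the port my local proxy listens on; substitute your own):

    export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
    npm run dev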

Abandon99 commented 1 year ago

The key tip for users in China is the terminal proxy setup above.

lpbottle commented 1 year ago

I tried the solution you provided. Before making any changes, the interface returned a 500 status and the response text 'An error occurred during your request'. After following your solution, the status changed to 400 and the response text became 'The plain HTTP request was sent to HTTPS port'. Does this mean that I need to use the HTTPS protocol?

for-something commented 1 year ago

1. npm install tunnel
2. In pages/api/generate.js:

const tunnel = require('tunnel');

  try {
    // Pass a second options argument so the underlying axios request goes through a local HTTP proxy.
    const completion = await openai.createCompletion({
      model: "text-davinci-003",
      prompt: generatePrompt(animal),
      temperature: 0.6,
    }, {
      httpsAgent: tunnel.httpsOverHttp({
        proxy: {
          host: '127.0.0.1',
          port: ??, // replace ?? with the HTTP port your local proxy listens on
        },
      }),
    });
    // ...the rest of the handler (the res.status(200) response and the existing catch block) stays unchanged.
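
To sanity-check the tunnel agent on its own, outside the Next.js handler, a small standalone script along these lines can be used (a sketch only: the file name check-proxy.js is arbitrary, axios is assumed to be available since the openai v3 library depends on it, and the script just lists models instead of creating a completion):

    // check-proxy.js: verify that HTTPS traffic to api.openai.com works through the local proxy.
    const axios = require('axios');
    const tunnel = require('tunnel');

    const agent = tunnel.httpsOverHttp({
      proxy: { host: '127.0.0.1', port: 7890 }, // the HTTP port of your local proxy (7890 in the export example above)
    });

    axios.get('https://api.openai.com/v1/models', {
      httpsAgent: agent,
      proxy: false, // disable axios' own proxy handling so only the tunnel agent is used
      headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
    })
      .then((res) => console.log(`OK, ${res.data.data.length} models visible`))
      .catch((err) => console.error('Request failed:', err.message));

If this script succeeds but the quickstart still returns 500, the issue is in how the agent is wired into the handler rather than in the proxy itself.
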
szhw-github commented 1 year ago

With the tunnel workaround above, I first got: Error with OpenAI API request: tunneling socket could not be established, cause=socket hang up

Then I used the port from my VPN settings, and that solved it.

coffeeFloat commented 1 year ago

Yeah! You are right!!!