k8sgpt-ai / k8sgpt

Giving Kubernetes Superpowers to everyone
http://k8sgpt.ai
Apache License 2.0

[BUG]: Update OpenAI API Key failed #1169

Open gyliu513 opened 2 weeks ago

gyliu513 commented 2 weeks ago

Checklist

Affected Components

K8sGPT Version

v0.3.37

Kubernetes Version

v1.21

Host OS and its Version

No response

Steps to reproduce

  1. Auth with an invalid OpenAI API Key
  2. Auth again with a valid OpenAI API Key

Expected behaviour

The API key should be able to be updated

Actual behaviour

The OpenAI API Key was not updated

Additional Information

No response

gyliu513 commented 2 weeks ago
root@gyliu-dev21:~# k8sgpt  analyze --explain
   0% |                                                                                                                                                                      | (0/19, 0 it/hr) [0s:0s]
Error: failed while calling AI provider openai: error, status code: 401, message: Incorrect API key provided: sk-SXt4j***************************************vGUc. You can find your API key at https://platform.openai.com/account/api-keys.

My new OpenAI API key starts with sk-LLZg, but it was not applied to the auth configuration after the update.

AlexsJones commented 2 weeks ago

That message comes directly from OpenAI and is not something we have any influence over.

gyliu513 commented 2 weeks ago

@AlexsJones this is not the issue I was reporting. If you look at my reproduction steps:

I expect k8sgpt analyze --explain to succeed, not fail, since I used a correct API key the second time.

AlexsJones commented 1 week ago

Can you show the commands you used?

gyliu513 commented 1 week ago

Run k8sgpt auth add twice, using an invalid API key the first time and a valid key the second time. You will then get an error when running k8sgpt analyze --explain:

root@dev21:~# k8sgpt auth add
Warning: backend input is empty, will use the default value: openai
Warning: model input is empty, will use the default value: gpt-3.5-turbo
Enter openai Key:
AlexsJones commented 1 week ago

I was able to reproduce this:

k8sgpt on  main via 🐹 v1.22.4 on ☁️  (eu-west-2)
❯ k8sgpt auth add
Warning: backend input is empty, will use the default value: openai
Warning: model input is empty, will use the default value: gpt-3.5-turbo
Enter openai Key: openai added to the AI backend provider list

k8sgpt on  main via 🐹 v1.22.4 on ☁️  (eu-west-2)
❯ cat ~/Library/Application\ Support/k8sgpt/k8sgpt.yaml
ai:
    providers:
        - name: openai
          model: gpt-3.5-turbo
          password: A
          temperature: 0.7
          topp: 0.5
          topk: 50
          maxtokens: 2048
    defaultprovider: ""
kubeconfig: ""
kubecontext: ""

k8sgpt on  main via 🐹 v1.22.4 on ☁️  (eu-west-2)
❯ k8sgpt auth add
Warning: backend input is empty, will use the default value: openai
Warning: model input is empty, will use the default value: gpt-3.5-turbo
Enter openai Key: openai added to the AI backend provider list

k8sgpt on  main via 🐹 v1.22.4 on ☁️  (eu-west-2)
❯ cat ~/Library/Application\ Support/k8sgpt/k8sgpt.yaml
ai:
    providers:
        - name: openai
          model: gpt-3.5-turbo
          password: A
          temperature: 0.7
          topp: 0.5
          topk: 50
          maxtokens: 2048
        - name: openai
          model: gpt-3.5-turbo
          password: B
          temperature: 0.7
          topp: 0.5
          topk: 50
          maxtokens: 2048
    defaultprovider: ""
kubeconfig: ""
kubecontext: ""
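
The duplicate entries above would also explain why the stale key keeps being used: if the backend lookup simply returns the first provider whose name matches, the old password "A" always wins. A minimal sketch of that failure mode, using hypothetical types for illustration rather than the real k8sgpt internals:

package main

import "fmt"

// Provider mirrors the shape of an entry in k8sgpt.yaml
// (a hypothetical struct for illustration, not the real k8sgpt type).
type Provider struct {
    Name     string
    Model    string
    Password string
}

// firstMatch returns the first provider whose name matches; if duplicates
// exist, the entry that was added first always wins.
func firstMatch(providers []Provider, name string) (Provider, bool) {
    for _, p := range providers {
        if p.Name == name {
            return p, true
        }
    }
    return Provider{}, false
}

func main() {
    providers := []Provider{
        {Name: "openai", Model: "gpt-3.5-turbo", Password: "A"}, // invalid key, added first
        {Name: "openai", Model: "gpt-3.5-turbo", Password: "B"}, // valid key, added second
    }
    if p, ok := firstMatch(providers, "openai"); ok {
        fmt.Println("selected key:", p.Password) // prints "A", so analyze still sends the bad key
    }
}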
AlexsJones commented 1 week ago

I will get this fixed @gyliu513

gyliu513 commented 1 week ago

Thanks @AlexsJones. One question here: does k8sgpt need to persist two openai configurations, or should it keep just one?

    providers:
        - name: openai
          model: gpt-3.5-turbo
          password: A
          temperature: 0.7
          topp: 0.5
          topk: 50
          maxtokens: 2048
        - name: openai
          model: gpt-3.5-turbo
          password: B
          temperature: 0.7
          topp: 0.5
          topk: 50
          maxtokens: 2048
    defaultprovider: ""
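
One way to keep a single entry would be for auth add to update an existing provider in place instead of appending a duplicate. A rough sketch of that update-or-append behavior, again with hypothetical types and not the actual k8sgpt code:

package main

import "fmt"

// Provider is a hypothetical stand-in for a k8sgpt.yaml provider entry.
type Provider struct {
    Name     string
    Model    string
    Password string
}

// upsertProvider replaces an existing entry with the same name, or appends
// a new one, so at most one "openai" entry is kept in the config.
func upsertProvider(providers []Provider, p Provider) []Provider {
    for i := range providers {
        if providers[i].Name == p.Name {
            providers[i] = p
            return providers
        }
    }
    return append(providers, p)
}

func main() {
    providers := []Provider{{Name: "openai", Model: "gpt-3.5-turbo", Password: "A"}}
    providers = upsertProvider(providers, Provider{Name: "openai", Model: "gpt-3.5-turbo", Password: "B"})
    fmt.Printf("%+v\n", providers) // single openai entry, now holding password "B"
}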