k8sgpt-ai / k8sgpt

Giving Kubernetes Superpowers to everyone
http://k8sgpt.ai
Apache License 2.0

[Feature]: Add a custom AI that can call rest API endpoint #990

Open lili-wan opened 9 months ago

lili-wan commented 9 months ago

Is this feature request related to a problem?

Yes

Problem Description

Our company has developed another platform on top of OpenAI for compliance and security reasons, and we can only use that internal API (its signature is similar to OpenAI's). Currently the OpenAI integration calls CreateChatCompletion directly through the Go client, which does not work for our use case.
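
For reference, the current integration does roughly the following (a minimal sketch using the sashabaranov/go-openai client; the actual wiring in k8sgpt differs in detail):

```go
package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// The openai backend builds a go-openai client and calls
	// CreateChatCompletion directly, so the endpoint path and the
	// request shape are fixed by the client library.
	client := openai.NewClient("sk-...") // API key from `k8sgpt auth`
	resp, err := client.CreateChatCompletion(context.Background(),
		openai.ChatCompletionRequest{
			Model: openai.GPT3Dot5Turbo,
			Messages: []openai.ChatCompletionMessage{
				{Role: openai.ChatMessageRoleUser, Content: "Simplify this Kubernetes error: ..."},
			},
		})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```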

Solution Description

Can we have an AI backend option that calls the AI API as a plain REST endpoint? It would take the following parameters as configurable input from the CLI/API (a rough sketch follows the list):

  1. AI endpoint
  2. Request payload body (JSON string)
  3. Authentication header
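
A minimal sketch of what such a generic backend could look like; nothing below exists in k8sgpt today, and all names are hypothetical. It simply POSTs the caller-supplied JSON payload to the caller-supplied endpoint with the caller-supplied auth header:

```go
package restai

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"net/http"
)

// Config mirrors the three configurable inputs proposed above (hypothetical).
type Config struct {
	Endpoint   string // 1. AI endpoint
	Payload    string // 2. request payload body (JSON string)
	AuthHeader string // 3. authentication header value, e.g. "Bearer <token>"
}

// Call POSTs the payload to the endpoint and returns the raw response body.
func Call(ctx context.Context, cfg Config) (string, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		cfg.Endpoint, bytes.NewBufferString(cfg.Payload))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", cfg.AuthHeader)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("status %d: %s", resp.StatusCode, body)
	}
	return string(body), nil
}
```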

Benefits

This provides more flexibility on the client side to configure the REST endpoint directly.

Potential Drawbacks

No response

Additional Information

No response

AlexsJones commented 8 months ago

Hi @lili-wan, thanks for creating this issue. If you're using an OpenAI-like API, you can configure an alternative endpoint, e.g.:

```bash
k8sgpt auth new --backend localai --model <model_name> --baseurl http://localhost:8080/v1
```

Would this work for you? Or do you have a completely different API?

warjiang commented 8 months ago

> Hi @lili-wan, thanks for creating this issue. If you're using an OpenAI-like API, you can configure an alternative endpoint, e.g.:
>
> ```bash
> k8sgpt auth new --backend localai --model <model_name> --baseurl http://localhost:8080/v1
> ```
>
> Would this work for you? Or do you have a completely different API?

Would it make sense to provide a localai sample project to help users integrate with their self-hosted LLM? Just a suggestion. @AlexsJones

AlexsJones commented 8 months ago

> Would it make sense to provide a localai sample project to help users integrate with their self-hosted LLM? Just a suggestion. @AlexsJones

There are lots of tutorials on how to use k8sgpt with LocalAI. The question I have is about @lili-wan's use case for a generic RESTful API.

remmen-io commented 6 months ago

I've tried to set up the localai backend to point to a local endpoint served with Hugging Face TGI:

```bash
k8sgpt auth update localai --model tgi --baseurl https://deepseek.k8scluster.ch/v1
```

but I get the following error:

```
➜ k8sgpt analyze -b localai --explain
   0% | | (0/17, 0 it/hr) [0s:0s]
Error: failed while calling AI provider localai: error, status code: 422, message:
```

Running the same test query directly with curl, however, works fine:

```bash
curl https://deepseek.k8scluster.ch/v1/chat/completions \
    -X POST \
    -d '{
      "model": "tgi",
      "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is deep learning?"}
      ],
      "stream": true,
      "max_tokens": 100
    }' \
    -H 'Content-Type: application/json'
```
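
One way to isolate the 422 is to reproduce the request k8sgpt itself makes: the localai backend goes through the go-openai client and issues a non-streaming CreateChatCompletion, so its request body differs from the curl above (for example, no "stream": true). A minimal sketch, reusing the endpoint and model from the report and assuming TGI accepts a placeholder API key:

```go
package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Point the go-openai client (the library behind the localai
	// backend) at the same base URL and issue a non-streaming chat
	// completion, mirroring what `k8sgpt analyze --explain` does.
	cfg := openai.DefaultConfig("dummy") // placeholder key; TGI may not check it
	cfg.BaseURL = "https://deepseek.k8scluster.ch/v1"
	client := openai.NewClientWithConfig(cfg)

	resp, err := client.CreateChatCompletion(context.Background(),
		openai.ChatCompletionRequest{
			Model: "tgi",
			Messages: []openai.ChatCompletionMessage{
				{Role: openai.ChatMessageRoleUser, Content: "What is deep learning?"},
			},
			MaxTokens: 100,
		})
	if err != nil {
		fmt.Println("request failed:", err) // should surface the same 422 if the body is at fault
		return
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```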