elastic / kibana

Your window into the Elastic Stack
https://www.elastic.co/products/kibana

[Security Solution] AI Assistant Sends Invalid Request Body to Azure OpenAI Connector #188190

Open msecjmu opened 1 month ago

msecjmu commented 1 month ago

Describe the bug: The AI Assistant in Kibana is sending an invalid request body when analyzing alerts using Azure OpenAI. The connector works correctly when tested in the configuration settings, but fails during actual usage.

Kibana/Elasticsearch Stack version: 8.14.2

Steps to reproduce:

  1. Create an OpenAI Connector.
  2. Select the provider as Azure OpenAI.
  3. Enter the deployment, API version, and API key.
  4. Go to Security and send a message to the AI Assistant.

Current behavior: The request body being sent to the API contains an incorrect value in the model key. According to the Azure OpenAI documentation, the model key is not required. Here is the intercepted request body:

```json
{
  "model": "{api-version}",
  "messages": [
    {
      "content": "You are a helpful, expert assistant who answers questions about Elastic Security. Do not answer questions unrelated to Elastic Security.\nIf you answer a question related to KQL, EQL, or ES|QL, it should be immediately usable within an Elastic Security timeline; please always format the output correctly with back ticks. Any answer provided for Query DSL should also be usable in a security timeline. This means you should only ever include the \"filter\" portion of the query.\nUse the following context to answer questions:\n\nhello",
      "role": "user"
    }
  ],
  "n": 1,
  "stop": null,
  "temperature": 0.2
}
```

Expected behavior: The request body should not include the model key, or it should be correctly populated according to the Azure OpenAI documentation.
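For comparison, a minimal sketch of what a valid Azure OpenAI chat-completions request looks like per the Azure docs: the deployment (which pins the model) lives in the URL path and the API version is a query parameter, so the JSON body needs no `model` key. The hostname and deployment name below are placeholders, not values from this issue.

```python
import json

# Placeholder endpoint: with Azure OpenAI the deployment name selects the
# model, and the api-version travels as a query parameter, not in the body.
endpoint = (
    "https://example-resource.openai.azure.com/openai/deployments/"
    "gpt-4o/chat/completions?api-version=2024-06-01"
)

# A valid body contains messages and sampling parameters only.
body = {
    "messages": [{"role": "user", "content": "hello"}],
    "n": 1,
    "stop": None,
    "temperature": 0.2,
}

print(json.dumps(body))
```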

Screenshots (if relevant): attached in the original issue.

elasticmachine commented 1 month ago

Pinging @elastic/security-solution (Team: SecuritySolution)

jamesspi commented 1 month ago

cc @peluja1012

@msecjmu, could you provide the API version of the Azure OpenAI service you are using? It can be found at the end of the Azure deployment URL:

?api-version=2024-02-15-preview
msecjmu commented 1 month ago

@jamesspi we're using: ?api-version=2023-09-01-preview

jamesspi commented 1 month ago

That's probably why, as we recommend 2024-05-13 in the docs -> https://www.elastic.co/guide/en/security/current/assistant-connect-to-azure-openai.html#_configure_a_model

msecjmu commented 1 month ago

Are you talking about the version of gpt-4o or the API version? We changed the api-version GET parameter to api-version=2024-06-01 and we still face the same issue. Our deployment is using this gpt-4o version: gpt-4o-2024-05-13.

jamesspi commented 1 month ago

Are you able to provide the full URL by any chance, as well as a screenshot of your connector setup please?

msecjmu commented 1 month ago

This is our full URL: https://hostname/aoai/openai/deployments/gpt-4o/chat/completions?api-version=2024-04-01-preview. We are routing it through our proxy. However, the problem is caused by Elastic sending an invalid model type ("{api-version}"), not by the proxy. Attack Discovery and the connector test both work. I could not find anywhere in the Microsoft docs that you can send {api-version} as a model.
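As an aside, a plausible way a literal `{api-version}` string could end up in a payload field is an unresolved URL-template placeholder being reused downstream. This is purely a hypothetical sketch (not taken from the Kibana source); the template, parameter names, and extraction logic are all assumptions for illustration:

```python
# Hypothetical sketch: if a URL template is substituted with a dict that is
# missing a key, the raw "{...}" placeholder survives, and any later code that
# derives a value from the resolved URL picks up the placeholder verbatim.
url_template = (
    "https://{host}/openai/deployments/{deployment}"
    "/chat/completions?api-version={api-version}"
)

params = {"host": "hostname", "deployment": "gpt-4o"}  # no "api-version" value

# Naive substitution that silently skips missing keys.
resolved = url_template
for key, value in params.items():
    resolved = resolved.replace("{" + key + "}", value)

# Deriving a "model" from the trailing query value yields the raw placeholder.
model = resolved.split("api-version=")[-1]
print(model)  # "{api-version}"
```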

jamesspi commented 1 month ago

Hi @msecjmu, where are you seeing that model type being sent? Which part of the request?

msecjmu commented 1 month ago

When I open the AI Assistant in the Security tab in Kibana, it sends this payload with the model:

```json
{
  "model": "{api-version}",
  "messages": [
    {
      "content": "You are a helpful, expert assistant who answers questions about Elastic Security. Do not answer questions unrelated to Elastic Security.\nIf you answer a question related to KQL, EQL, or ES|QL, it should be immediately usable within an Elastic Security timeline; please always format the output correctly with back ticks. Any answer provided for Query DSL should also be usable in a security timeline. This means you should only ever include the \"filter\" portion of the query.\nUse the following context to answer questions:\n\nhello",
      "role": "user"
    }
  ],
  "n": 1,
  "stop": null,
  "temperature": 0.2
}
```