elastic / kibana


[AI Assistant for Observability] Better communication surrounding ML node usage #179724

Open CoenWarmer opened 3 months ago

CoenWarmer commented 3 months ago

Summary

Currently it is not always clear under which scenarios ML nodes are spun up when using the assistant, or whether there are costs associated with that. This has led to some customers incurring unforeseen costs, which is something we want to prevent.

Attempt to understand the offerings vis-à-vis ML nodes

Serverless: The ELSER model is always available, and ML node provisioning is done automatically for the user. 'Bill shock' cannot happen in this offering.

Cloud / Stateful: When autoscaling is enabled and the ELSER model is not yet installed, the AI Assistant automatically requests deployment of the ELSER model, and thus provisions an ML node.

When autoscaling is not enabled and the ELSER model is not yet installed, the AI Assistant will return an error message in the UI.

When the ELSER model is installed, the AI Assistant will not show anything in the UI.
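The stateful scenarios above boil down to a small decision tree: is ELSER already deployed, and is ML autoscaling enabled? Below is a minimal TypeScript sketch of a pre-flight check that could drive the UI messaging. It is illustrative only: it uses the public @elastic/elasticsearch client rather than Kibana's internal services, the `getMlNodeMessage` helper and the message texts are hypothetical, and the autoscaling state is assumed to be resolved elsewhere and passed in as a boolean.

```typescript
import { Client } from '@elastic/elasticsearch';

// ELSER v2 is published under this model id on current stacks (assumption).
const ELSER_MODEL_ID = '.elser_model_2';

type MlNodeMessage =
  | { level: 'none' }                 // ELSER already deployed: nothing to show
  | { level: 'info'; text: string }   // autoscaling will add an ML node (possible cost)
  | { level: 'error'; text: string }; // no autoscaling and no model: setup required

/**
 * Hypothetical pre-flight check run before knowledge-base setup, so the UI
 * can explain ML node usage (and potential cost) up front.
 */
export async function getMlNodeMessage(
  client: Client,
  autoscalingEnabled: boolean
): Promise<MlNodeMessage> {
  let deployed = false;
  try {
    const stats = await client.ml.getTrainedModelsStats({ model_id: ELSER_MODEL_ID });
    deployed = stats.trained_model_stats.some(
      (s) => s.deployment_stats?.state === 'started'
    );
  } catch {
    // Model not installed (404) or stats unavailable: treat as not deployed.
    deployed = false;
  }

  if (deployed) {
    return { level: 'none' };
  }
  if (autoscalingEnabled) {
    return {
      level: 'info',
      text:
        'Setting up the knowledge base will deploy ELSER. Autoscaling may add an ML node, which can increase the cost of your deployment.',
    };
  }
  return {
    level: 'error',
    text:
      'ELSER is not deployed and ML autoscaling is disabled. Provision an ML node or enable autoscaling before setting up the knowledge base.',
  };
}
```

Keeping this as a single check with three explicit outcomes would let the UI show the cost warning before any provisioning happens, which is the communication gap this issue describes.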

Acceptance criteria

grabowskit commented 2 months ago

@boriskirov - can you help with designs for the different scenarios?

grabowskit commented 2 months ago

Need help from the ML team to implement this.