
[AI Assistant]: Document recommendations for using the Gemini connector #4410

Open dedemorton opened 1 month ago

dedemorton commented 1 month ago

Description

When we documented the Gemini connector in the Observability AI Assistant docs (https://github.com/elastic/observability-docs/pull/4143), we agreed to point users to the connector documentation as the source of truth for what is supported/required.

@lucabelluccini made a comment indicating that we need to converge on which recommendations belong in the connector docs vs. the Observability docs. This feedback requires further discussion before implementing, so I'm opening this follow-up issue. Here is the text of Luca's comment:

"As the AI Connector of Platform can be used by both O11y & Security, we should try to converge what's common in the connectors docs and recommendations for O11y in the O11y AI docs, notably:

This work will require coordination with the team that documents connectors and the security writers.

Resources

n/a

Which documentation set does this change impact?

Stateful and Serverless

Feature differences

No difference AFAIK.

What release is this request related to?

N/A

Collaboration model

The documentation team

Point of contact

Main contact: @emma-raffenne

Stakeholders:

emma-raffenne commented 1 month ago

With the work currently underway to unify assistants across solutions and to provide an "always-on" LLM and a genAI connector, we will have to revisit this part of the documentation entirely.

Nevertheless, we should distinguish between documenting our recommendations about LLM performance (still under discussion) and documenting the genAI connectors themselves, which are documented in the Kibana docs and should remain the single source of truth in that regard.

cc @teknogeek0

teknogeek0 commented 3 days ago

We've caught up on this more recently. A few thoughts:

  1. We should be clear that we only support using our suggested LLMs at this time. We do not support self-hosted models, and we aren't able to confirm the functionality of LLMs that aren't on our list.
  2. There's some minor disjointedness in how we position the supported LLMs today; we need to be consistent and clear about them.
  3. We're holding off on any sort of "matrix" like Security has. We see less value in it, and there's too much ambiguity in the rating system. Customers are more likely to come to us with a desired LLM to plug in (probably the one from their cloud provider, if they're in the cloud), or we can simply tell them which LLMs we've seen the best results with.
  4. I like the guidance around limits/pay tiers and how a customer will need to think about that.
  5. On the back of this, we might want to cross-link to the various LLMs' own docs on performance/monitoring/troubleshooting (and/or our own docs on integrations that provide O11y for these services) to complete the circle of owning them.