dedemorton opened this issue 1 month ago (Open)
With the work currently under way on unifying assistants across solutions and on providing an "always-on" LLM and a genAI connector, we will have to revisit this part of the documentation entirely.
Nevertheless, we should distinguish between documenting our recommendations about LLM performance (still under discussion) and documenting the genAI connectors themselves, which are covered in the Kibana docs and should remain the single source of truth in that regard.
cc @teknogeek0
We've caught up on this more recently. A few thoughts:
Description
When we documented the Gemini connector in the Observability AI Assistant docs (https://github.com/elastic/observability-docs/pull/4143), we agreed to point users to the connector documentation as the source of truth for what is supported and required.
@lucabelluccini made the following comment indicating that we need to converge on which recommendations belong in the connector docs versus the Observability docs. This feedback requires further discussion before it can be implemented, so I'm opening this follow-up issue. Here is the text of Luca's comment:
"As the AI Connector of Platform can be used by both O11y & Security, we should try to converge what's common in the connectors docs and recommendations for O11y in the O11y AI docs, notably:
This work will require coordination with the team that documents connectors and with the Security docs writers.
Resources
n/a
Which documentation set does this change impact?
Stateful and Serverless
Feature differences
No difference AFAIK.
What release is this request related to?
N/A
Collaboration model
The documentation team
Point of contact
Main contact: @emma-raffenne
Stakeholders: