Pinging @elastic/kibana-core (Team:Core)
@jtibshirani @imotov I have a couple of questions about the integration with the ES slow query logs (or stats):

Right now, Kibana passes the x-opaque-id value as a uuid string: x-opaque-id: 6c4e0436-86d7-4c55-bb21-e522a5afc0f2. We'd like to pass structured context instead, e.g. { "x-opaque-id": "{ \"id\": ....}, and switch to the standard observability headers (baggage, for example) later. But I'm open to any suggestions.
Could you confirm that Elasticsearch includes the x-opaque-id header in the slow logs and that no additional work is required from your side?

I don't have information on the slow logs, but the recently-added HTTP client stats in ES report the first observed x-opaque-id for each HTTP client. Additional context from Kibana would be great, and if it is implemented in the x-opaque-id field as proposed here, the HTTP client stats in ES should be changed to report the most recently observed ID rather than just the first observed ID.
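For illustration, a minimal sketch of how Kibana could JSON-encode such context into the existing header; the field names below are hypothetical, not a settled format:

  // Hypothetical structured value for the existing x-opaque-id header;
  // the field names are illustrative only.
  const executionContext = { type: 'visualization', name: 'gauge', id: '1234-5678' };
  const headers = {
    'x-opaque-id': JSON.stringify(executionContext),
  };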
Could you confirm that Elasticsearch includes the x-opaque-id header in the slow logs and no additional work is required from your side?
Yes, since https://github.com/elastic/elasticsearch/pull/31539
In the next iteration, we will use the APM RUM agent for context propagation to get rid of the custom 'kbn-context' header.
Adding a bit of background on why we've decided not to use the RUM agent for that particular part.
This would require support for baggage, and either the ability to manually inject headers (https://github.com/elastic/apm-agent-rum-js/issues/468) or a context management API (https://github.com/elastic/apm-agent-rum-js/issues/1040). As that's a lot of dependencies and a non-trivial amount of work for the RUM agent, I think it's easier to manually propagate context from the Kibana frontend to the Kibana backend using a custom header. An important factor is that this custom header is contained within Kibana (frontend -> backend). This means it's an internal implementation detail that can change later on. The custom Kibana context will not be sent to Elasticsearch.
Whenever Kibana sends a request to the Elasticsearch server, Kibana adds the kibanaContext label to the x-opaque-id header. This allows Stack users to identify the source of a query in the slowlogs without having to inspect Kibana logs.
I'm ok with that but I hope we can view that as a stretch goal. One thing that we might want to discuss is whether we even want the labels to store data when tracing is turned off in the Node.js agent vs labels acting as a noop in that setting. In the future, we probably want to remove X-Opaque-Id completely in favor of traceparent and baggage.
One thing that we might want to discuss is whether we even want the labels to store data when tracing is turned off in the Node.js agent vs labels acting as a noop in that setting.
@felixbarny We aren't turning off agent tracing here, though, are we? We are just not sending trace data on to an APM server.
No, we wouldn't turn off tracing completely when setting disable_send=true. We'd work in a mode that's similar to 0% sampling where we may want to noop some things. For example, not storing labels, not collecting the ES query, and other things that reduce memory and runtime overhead. If we expose getters for labels, that may not be possible anymore. OTel doesn't expose getters for their attributes for that reason.
The semantics for baggage are defined differently. IINM, you can set and get values no matter the sampling decision.
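To make the distinction concrete, here is a small sketch using the OpenTelemetry JS API as an illustration of those baggage semantics (not something the Elastic agents exposed at the time; the baggage key is made up):

  import { context, propagation } from '@opentelemetry/api';

  // Baggage entries are plain key/value context; they stay readable and
  // writable regardless of the sampling decision, unlike span labels.
  const baggage = propagation.createBaggage({
    'kibana.execution_context': { value: 'visualization:gauge:1234-5678' },
  });
  const ctx = propagation.setBaggage(context.active(), baggage);

  // Later in the same logical flow, even for an unsampled trace:
  const entry = propagation.getBaggage(ctx)?.getEntry('kibana.execution_context');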
Have we looked at "higher-level" ways of passing the execution context, rather than just on specific requests? For alerting and action tasks, we provide the task with an es client, and that would be a place we could add an execution context to be associated with all the calls made with it - no changes to the actual es call sites within all the rule/action types would be required. Or maybe this is something we could do already with the existing es client?
Wondering if it would be possible to associate multiple "things" with a request. For example, for an alerting rule execution, it might be nice to mark a request as "from alerting" and then also "from rule type XYZ", and then you could even imagine a rule type adding additional "markers" to differentiate multiple requests it's making.
We'll definitely be wanting to associate es queries with specific rule types, but I'm curious - once we start collecting this data - if it would also be useful to see requests in the scope of "all alerting uses". Without having to add up a bunch of numbers ourselves.
Have we looked at "higher-level" ways of passing the execution context, rather than just on specific requests? we provide the task with an es client, and that would be a place we could add an execution context to be associated with all the calls made with it
@pmuellr in https://github.com/elastic/kibana/pull/107523 I added a withContext wrapper: https://github.com/elastic/kibana/blob/27aca6cdd19d5a8a335ab9b4433ec76a1fa3c7a4/src/core/server/execution_context/execution_context_service.ts#L59-L63
I'd expect we wrap task.run with withContext to provide a task-specific context: https://github.com/elastic/kibana/blob/27aca6cdd19d5a8a335ab9b4433ec76a1fa3c7a4/x-pack/plugins/task_manager/server/task_running/task_runner.ts#L264

Or maybe this is something we could do already with the existing es client?

Core will inject the context details into the ES client calls automatically. What we need from the alerting plugin is to provide the context data with the withContext wrapper.
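A rough sketch of what that wrapping could look like, assuming the withContext(context, fn) signature from the linked PR; the context fields used here (type/name/id) are placeholders, not a final schema:

  // Hypothetical: Task Manager wraps the task run so every ES call made inside
  // it inherits a task-specific execution context.
  const result = await executionContext.withContext(
    { type: 'task manager', name: taskInstance.taskType, id: taskInstance.id },
    () => task.run()
  );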
Wondering if it would be possible to associate multiple "things" with a request. For example, for an alerting rule execution, it might be nice to mark a request as "from alerting" and then also "from rule type XYZ", and then you could even imagine a rule type adding additional "markers" to differentiate multiple requests it's making.
That makes sense. withContext is similar to withSpan here: it creates a nested record, so alerting can specify the details of the context for some operations:
// ctx: undefined
withContext({ type: 'a' }, () => {
  // ctx: {type: 'a'}
  withContext({ type: 'b' }, () => {
    // ctx: {type: 'b', parent: {type: 'a'}}
  });
  // ctx: {type: 'a'}
});
if it would also be useful to see requests in the scope of "all alerting uses".
Sorry, I'm not quite following. Could you elaborate on it, please?
The reason I asked about associating multiple "things" with an ES call is so that we can run some aggs over the logs (assuming they are ingested into ES), looking for "all ES calls associated with alerting" as well as "all ES calls associated with this alert type", etc. Basically, have a super-general "this is from an alerting rule" marker but also associate the specific rule types, or, if the rule type has different types of queries that it wants to do special accounting for, those as well - via aggs.
So, however we store these "multiple contexts", we'd like to be able to query on specific ones. I assume this won't be a problem, just wanted to mention it. I'd be happy even if the mappings for these aren't available (type object / enabled false), or hard to access (type nested | type flattened), as long as we can access them via runtime fields.
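As an illustration of that kind of query, assuming the execution context is ingested (or exposed via a runtime field) under made-up field names like kibana.execution_context.type / .name:

  // Hypothetical aggregation over ingested Kibana logs: "all ES calls from
  // alerting", broken down by rule type. Index pattern and fields are illustrative.
  const response = await client.search({
    index: 'logs-kibana-*',
    size: 0,
    query: { term: { 'kibana.execution_context.type': 'alert' } },
    aggs: {
      by_rule_type: { terms: { field: 'kibana.execution_context.name' } },
    },
  });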
This issue is mostly resolved by https://github.com/elastic/kibana/pull/124996.
All executed searches will now have the context set to the context provided by the application, or to the app name if the application didn't provide a top-level context by calling useExecutionContext.
We'll use https://github.com/elastic/kibana/issues/102629 to track the solutions' use of useExecutionContext.
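For reference, a sketch of what providing such a top-level context could look like, assuming the useExecutionContext helper; the app name and fields shown are illustrative:

  // Inside an app's top-level React component; core.executionContext comes
  // from core start. Field values are placeholders.
  useExecutionContext(core.executionContext, {
    type: 'application',
    name: 'dashboards',
    id: dashboardId,
  });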
While this issue aims to address the main concern of https://github.com/elastic/kibana/issues/97934 (provide the ability to trace an ES query back to the source in Kibana code that initiated the request), we want to lay the foundation for e2e tracing in the whole Stack. To make it happen, Kibana will rely on the built-in capabilities of the APM RUM and nodejs APM agents, and their integration with the Elasticsearch service.

High-level picture

Kibana Frontend
Context should allow Kibana users to unambiguously identify the source of a query in the Kibana App in the browser, the Kibana server, or the task manager.
The APM RUM agent doesn't provide support for async context propagation in the browser. Kibana will have to implement manual context passing.
A plugin creates an execution context object with the API provided by Core. The returned value is opaque to the plugin.
The obtained execution context should be passed to the Kibana server manually through all the layers of abstraction in Kibana. Kibana sets it as a custom request header before issuing a request to the Kibana server:
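A hypothetical sketch of that flow; the Core API name, the header handling, and the context fields below are illustrative rather than the final API:

  // A plugin asks Core for an opaque execution context object...
  const context = core.executionContext.create({
    type: 'visualization',
    name: 'gauge',
    id: '1234-5678',
  });

  // ...and hands it to the http service, which serializes it into the custom
  // request header before calling the Kibana server.
  await core.http.post('/internal/my_plugin/search', {
    context,
    body: JSON.stringify(searchRequest),
  });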
For the first implementation, we start with the context capturing a single context level - visualizations. In the next iteration, we can add support for nested execution contexts. It can be used to compose execution context relationships across different apps: Application service context --> Dashboard context --> Visualization context.
Server-side
Depends on: APM agents can be used without APM server (https://github.com/elastic/apm-agent-nodejs/issues/2101); traceparent header support (v7.15).
The traceparent header will be used for log correlation across the Kibana and Elasticsearch servers. To make it possible, Kibana should add trace.id to the log records. TODO: discuss with the Elasticsearch team in what form they are going to include it into the Elasticsearch logs. It will likely be present in ECS-JSON logs by default; presence in the text logs is discussable.
The Kibana server reads the execution context from the 'kbn-context' header. The context + trace.id are emitted to the Kibana logs. The minimal subset of the execution context data, in the form kibana:type:name:id (kibana:visualization:gauge:1234-5678, for example), is attached to the current APM transaction as a kibanaContext label.
A plugin can create an execution context on the server-side as well. The context passing works in the same way as for the client-side counterpart.
Whenever Kibana requests the Elasticsearch server, it adds the kibanaContext label to the x-opaque-id header. It allows Stack users to identify the source of a query in slowlogs without the necessity to inspect Kibana logs. TODO: discuss with the Elasticsearch team whether trace.id is included in the slowlogs as well.
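One possible shape of the log-correlation piece, using the Node.js APM agent's currentTraceIds; how Kibana's logging system actually attaches this metadata is not specified here:

  import apm from 'elastic-apm-node';

  // When a transaction is active, currentTraceIds exposes trace.id and
  // transaction.id; adding them to the log meta lets a Kibana log record be
  // correlated with the corresponding trace and Elasticsearch logs.
  logger.info('executing Elasticsearch request', { ...apm.currentTraceIds });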
Instrumentation
The list of instrumentation points should be discussed with every team separately. We are primarily interested in instrumenting plugins that may cause performance problems in Elasticsearch:
During the initial implementation, the Core team will instrument several plugins and implement integration testing as an example. Later, we will create separate issues for code owners to help us with this work.
List of sub-tasks
- Context propagation
- slowlogs
- Log correlation: trace.id in the logs for log correlation purposes. https://github.com/elastic/kibana/issues/102699