YulNaumenko opened this issue 2 months ago
Pinging @elastic/kibana-management (Team:Kibana Management)
If that helps, here is the OpenAPI spec of the Serverless Project Metrics API: https://github.com/elastic/autoops-services/blob/master/monitoring/service/specs/serverless_project_metrics_api.yaml
Hi @ashokaditya. I have a basic API contract I wanted to share with you. It doesn't take the overview metrics into consideration right now, as I'd like to focus on the charts first, but feel free to add those or add placeholders as you see fit. Also feel free to change the names of things; this is just to illustrate structure.
{
"from": "2023-09-01T00:00:00Z",
"to": "2023-09-30T23:59:59Z",
"metricTypes": ["ingestedMax", "retainedMax"], // Flexible to support multiple metric types
"dataStreams": [] // Optional: If omitted or empty, return top N data streams
}
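For reference, a minimal TypeScript sketch of that request shape could look like the following; the type and field names are assumptions drawn from the example above, not an agreed contract.

```ts
// Sketch of the request payload above; names are assumptions, not a final contract.
type ChartMetricType = 'ingestedMax' | 'retainedMax';

interface DataUsageMetricsRequest {
  from: string;                   // ISO 8601 start of the range
  to: string;                     // ISO 8601 end of the range
  metricTypes: ChartMetricType[]; // flexible to support multiple metric types
  dataStreams?: string[];         // optional: if omitted or empty, return top N data streams
}
```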
A few notes on the request fields:

metricTypes: I am assuming we are getting max values for the metrics, so maybe that should be reflected in the names. I'll likely request one metric type at a time, for each chart, to improve performance.
size: it sounds like we are going to get the top N, which can be hard coded on the server side, so I'm not seeing a use case where I would need to specify the size.
dataStreams: this will be empty, and I'll expect the N data streams to plot. Once the feature for the user searching and adding specific data streams happens, I'll expect those specific data streams. Again, size would not be needed here; it's just the length of the dataStreams array if the array is not empty.

Response example:
{
"charts": [
{
"key": "ingestedMax",
"series": [
{
"streamName": "data_stream_1",
"data": [
{ "x": 1726858530000, "y": 1000000 },
{ "x": 1726862130000, "y": 1200000 },
{ "x": 1726865730000, "y": 1100000 }
]
},
{
"streamName": "data_stream_2",
"data": [
{ "x": 1726858530000, "y": 950000 },
{ "x": 1726862130000, "y": 980000 },
{ "x": 1726865730000, "y": 990000 }
]
}
]
},
{
"key": "retainedMax",
"series": [
{
"streamName": "data_stream_1",
"data": [
{ "x": 1726858530000, "y": 800000 },
{ "x": 1726862130000, "y": 850000 },
{ "x": 1726865730000, "y": 870000 }
]
},
{
"streamName": "data_stream_2",
"data": [
{ "x": 1726858530000, "y": 700000 },
{ "x": 1726862130000, "y": 720000 },
{ "x": 1726865730000, "y": 750000 }
]
}
]
}
]
}
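For clarity, here is a minimal TypeScript sketch of the response shape illustrated above; the interface names are assumptions for illustration, not part of any agreed contract.

```ts
// Sketch of the response payload above; names are assumptions, not a final contract.
interface MetricDataPoint {
  x: number; // timestamp in epoch milliseconds
  y: number; // metric value (bytes)
}

interface MetricSeries {
  streamName: string;
  data: MetricDataPoint[];
}

interface MetricChart {
  key: 'ingestedMax' | 'retainedMax';
  series: MetricSeries[];
}

interface DataUsageMetricsResponse {
  charts: MetricChart[];
}
```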
Can you confirm with AutoOps that the data streams could be different per chart? If we request the top N data streams, I would expect they could differ based on the metric types, and you would receive two separate sorted arrays.
@ashokaditya I removed the timeInterval, as it looks like the charts can figure that out based on the min and max ranges of the time series data. I also probably don't need the yUnit right now and can assume it's bytes; it may be something we need later, but we could add it then.
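As a rough illustration of that point, a chart component could derive a bucket interval from the series data itself; the hypothetical helper below is only a sketch, not existing Kibana code.

```ts
// Hypothetical helper: infer a bucket interval from the time series points themselves.
function inferIntervalMs(points: Array<{ x: number }>): number {
  const xs = points.map((p) => p.x).sort((a, b) => a - b);
  if (xs.length < 2) return 0;
  // The smallest gap between consecutive timestamps is a reasonable interval guess.
  let interval = Infinity;
  for (let i = 1; i < xs.length; i++) {
    interval = Math.min(interval, xs[i] - xs[i - 1]);
  }
  return interval;
}

// Example with the sample data above (hourly points):
// inferIntervalMs([{ x: 1726858530000 }, { x: 1726862130000 }]) === 3600000
```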
API/UX hooks PR https://github.com/elastic/kibana/pull/193966
Page enhancements PR https://github.com/elastic/kibana/pull/195556
Use auto ops service PR https://github.com/elastic/kibana/pull/196312
Handling errors PR https://github.com/elastic/kibana/pull/197056
Integration tests PR https://github.com/elastic/kibana/pull/197112
Unit tests PR https://github.com/elastic/kibana/pull/198007
Integration with auto ops PR https://github.com/elastic/kibana/pull/200192
Enable autoops on Dev+QA PR https://github.com/elastic/serverless-gitops/pull/5188
Update URL on kibana-controller PR https://github.com/elastic/kibana-controller/pull/483
In Serverless Security and Observability projects, users should be able to analyze how much data they are ingesting (daily, weekly, etc.) and retaining over a selected period of time. Currently, the Kibana Index Management page shows only the storage size per data stream at the current point in time.
The goal of this issue is to build the APIs that will extend the current Data Streams tab with the chart and chart management logic described in the UI part of the scope: https://github.com/elastic/kibana/issues/192966.
The requirements for the API:
Request Body:
{
  "from": 1725432672446,
  "to": 1725433672446,
  "size": 10,
  "sort": "asc",
  "level": "datastream",
  "metric_types": ["storage_retained", "ingest_rate"],
  "allowed_indices": ["index-1", ..., "index-n"]
}
{ "metrics": { "storage_retained": [ { "name": "ds-1", "data": [ [ "timestamp", "size" ], ..... }, ..., ], "ingest_rate": [ { "name": "index-1", "data": [ [ "timestamp", "size" ], ..., }, ... ] } }