zmoog opened 10 months ago
First, I need to create a new event hub to receive the metrics. For this purpose, I will reuse the Event Hub namespace called `mbranca-general` and create a new event hub named `allmetrics`:
Next step: open an existing Function App and set up a diagnostic setting to route all metrics to the `allmetrics` event hub:
I need to invoke the function to generate some data, so I will hit the Azure Function's HTTP trigger using `curl`:
```shell
$ watch -n 1 curl -i https://return-of-the-jedi.azurewebsites.net/api/hello\?name\=mbranca
```
I want to inspect the metric events published on the event hub, so I will use the eventhubs CLI tool to tap into it:
```shell
export EVENTHUB_CONNECTION_STRING="Endpoint=sb:// ..."
export EVENTHUB_NAMESPACE="mbranca-general"
export EVENTHUB_NAME="allmetrics"

$ eh -v eventdata receive
Receiving events from allmetrics
```
No metrics so far (the message count is still flat at zero). I wonder if I'm missing something.
I set up the diagnostic settings at 11:30 pm, and the metrics started flowing at 1:42 am, about 2 hours and 12 minutes later. Take your time, diagnostic settings.
Here is the metrics flow in the event hub:
I am using the eventhubs CLI tool to inspect the metric events published in the event hub.
```shell
#
# Set the environment variables to target the event hub
#
export EVENTHUB_CONNECTION_STRING="..."
export EVENTHUB_NAMESPACE="mbranca-general"
export EVENTHUB_NAME="allmetrics"

#
# Receive all the metrics and redirect them into the `mbranca-general.allmetrics.ndjson` file.
#
eh eventdata receive > mbranca-general.allmetrics.ndjson
```
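Before digging into single events, a quick way to see which metrics are flowing is to flatten the `records` arrays and count by `metricName`. Here is a minimal sketch in Python; the sample lines below are illustrative, in the same shape as the `mbranca-general.allmetrics.ndjson` capture above:

```python
import json
from collections import Counter

# Illustrative NDJSON lines in the same shape as the capture file;
# in practice these would come from open("mbranca-general.allmetrics.ndjson").
sample_lines = [
    '{"records": [{"count": 2, "total": 41, "metricName": "Requests", "timeGrain": "PT1M"}]}',
    '{"records": [{"count": 2, "total": 38, "metricName": "Http2xx", "timeGrain": "PT1M"}]}',
]

# Each event hub message wraps one or more metric records in a "records"
# array, so flatten before counting.
counts = Counter(
    record["metricName"]
    for line in sample_lines
    for record in json.loads(line)["records"]
)

for name, n in counts.most_common():
    print(f"{name}\t{n}")
```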
Let's see what a metric event looks like:
```shell
$ tail -n 1 mbranca-general.allmetrics.ndjson | jq
{
  "records": [
    {
      "count": 2,
      "total": 41,
      "minimum": 20,
      "maximum": 21,
      "average": 20.5,
      "resourceId": "/SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI",
      "time": "2023-11-30T07:39:00.0000000Z",
      "metricName": "Requests",
      "timeGrain": "PT1M"
    },
    {
      "count": 2,
      "total": 38,
      "minimum": 17,
      "maximum": 21,
      "average": 19,
      "resourceId": "/SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI",
      "time": "2023-11-30T07:40:00.0000000Z",
      "metricName": "Requests",
      "timeGrain": "PT1M"
    },
    ....
```
The message structure looks similar to the log events: a `records` list.

Let's pick one metric and turn its values into a table:
```shell
$ cat mbranca-general.allmetrics.ndjson | jq -r '.records[] | select(.metricName == "Http2xx") | [.metricName,.minimum,.maximum,.average,.resourceId] | @tsv' | tail -n 20

# columns: .metricName, .minimum, .maximum, .average, .resourceId
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  21  22  21.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  22  21    /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  22  21    /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  22  21    /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  21  21  21    /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  17  21  19    /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  18  20  19    /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  21  26  23.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  20  20    /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
Http2xx  20  21  20.5  /SUBSCRIPTIONS/123/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI
```
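Worth noting: each record's `average` is just `total / count` for one PT1M grain, so when rolling up across grains it is safer to sum `total` and `count` than to average the per-grain averages. A minimal sketch in Python, using illustrative records in the same shape as the capture above:

```python
# Illustrative PT1M records in the shape shown above (values are made up
# to demonstrate the point, with deliberately different counts).
records = [
    {"count": 2, "total": 41, "average": 20.5, "metricName": "Requests"},
    {"count": 4, "total": 38, "average": 9.5, "metricName": "Requests"},
]

# Each record's average is total / count for its own time grain.
for r in records:
    assert r["average"] == r["total"] / r["count"]

# Correct roll-up: sum totals and counts across grains.
overall = sum(r["total"] for r in records) / sum(r["count"] for r in records)

# Naive roll-up: averaging the averages weights a sparse minute the
# same as a busy one, and gives a different (wrong) answer here.
naive = sum(r["average"] for r in records) / len(records)

print(overall, naive)
```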
Let's try to set up a generic Event Hub integration to collect these metrics and send them into a data stream in Elasticsearch.
First, I'll use the default configuration, so the documents will land in the `logs-azure.eventhub-default` data stream.
Created a new agent policy named "Azure Metrics (from Event Hub)":
Added a generic Event Hub integration:
https://github.com/zmoog/public-notes/assets/25941/c0215766-6d95-4fb7-85a9-73164fcfc924
And I assigned the policy to an agent.
Give the agent a few seconds to apply the new policy and start working.
Open Discover and select the `logs-*` data view. We are using an integration for logs, but aside from the data stream type, documents are documents:
https://github.com/zmoog/public-notes/assets/25941/11ad7f31-c7b2-4c7d-97ac-dbe35d865ed8
By default, the input and the integration turn the following JSON object:
```json
{
  "average": 19.5,
  "count": 2,
  "maximum": 20,
  "metricName": "Requests",
  "minimum": 19,
  "resourceId": "/SUBSCRIPTIONS/12CABCB4-86E8-404F-A3D2-1DC9982F45CA/RESOURCEGROUPS/MBRANCA-TEST-RG/PROVIDERS/MICROSOFT.WEB/SITES/RETURN-OF-THE-JEDI",
  "time": "2023-11-30T08:00:00.0000000Z",
  "timeGrain": "PT1M",
  "total": 39
}
```
Into this document in Elasticsearch:
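As I understand it, the `azure-eventhub` input expands the `records` array, generating one event per record before the integration's pipeline runs. A minimal Python sketch of that split (the message below is illustrative, in the shape captured above):

```python
import json

# An illustrative event hub message in the shape captured above:
# a "records" array holding multiple metric records.
message = json.loads("""
{
  "records": [
    {"metricName": "Requests", "time": "2023-11-30T08:00:00.0000000Z", "average": 19.5},
    {"metricName": "Http2xx",  "time": "2023-11-30T08:00:00.0000000Z", "average": 20.5}
  ]
}
""")

# Mimic the split: each element of "records" becomes its own event
# payload, ready to be shipped and processed independently.
events = [json.dumps(record) for record in message["records"]]

for event in events:
    print(event)
```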
Here are some notes I collected outside this issue thread.
**Azure Monitor doesn't include dimensions in the exported metrics data** that is sent to a destination like Azure Storage, Azure Event Hubs, or Log Analytics.
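This matches the record shape captured earlier: every field in an exported record is either an aggregation or metric/resource identity, with no dimension key/value pairs. A small Python check on a record copied from the capture (the field grouping is my own):

```python
import json

# A record in the shape captured earlier: aggregations plus
# metric/resource identity, nothing else.
record = json.loads(
    '{"count": 2, "total": 41, "minimum": 20, "maximum": 21,'
    ' "average": 20.5, "metricName": "Requests",'
    ' "timeGrain": "PT1M",'
    ' "time": "2023-11-30T07:39:00.0000000Z"}'
)

aggregations = {"count", "total", "minimum", "maximum", "average"}
identity = {"metricName", "timeGrain", "time", "resourceId"}

# No field is left over, so there is no per-dimension breakdown
# to recover downstream.
leftover = set(record) - (aggregations | identity)
print(leftover)
```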
You can export the platform metrics from the Azure Monitor pipeline to other locations in one of two ways:

- Use the metrics REST API.
- Use diagnostic settings to route platform metrics to:
  - Azure Storage.
  - Azure Monitor Logs (and thus Log Analytics).
  - Event hubs, which is how you get them to non-Microsoft systems.
- Diagnostic settings in Azure Monitor: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/diagnostic-settings
- Supported metrics with Azure Monitor: https://learn.microsoft.com/en-us/azure/azure-monitor/reference/supported-metrics/metrics-index
Some Azure resources (for example, Function Apps) have diagnostic settings that support exporting metrics alongside logs.
I want to try routing these metrics to an event hub and ingesting them using the Filebeat `azure-eventhub` input.