tman5 closed this issue 1 year ago
cc @kgeckhart @thepalbi
@tman5 I'm unable to reproduce this error, can you provide more info about how you are running the agent?
I tested with a config I know works without any issues and running the config you provided only produces an error from azure which means it got past the request validation. I ran the latest agent docker image, and the v0.32.1 docker image.
Facing the same issue running the Docker image for Grafana Agent, latest version.
docker run -p 12345:12345 -v C:\azmetrics_grafanaagent:/etc/agent -e AZURE_TENANT_ID=xxxxxxxxxxxxxxxxxxx -e AZURE_CLIENT_ID=xxxxxxxxxxxxxxxxxxxxxxx -e AZURE_CLIENT_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxx -v C:\azmetrics_grafanaagent\agent.yaml:/etc/agent/agent.yaml -d grafana/agent
config:

integrations:
  azure_exporter:
    enabled: true
    scrape_integration: false
metrics:
  configs:
The Docker container comes up, but when I exec into it and run a curl, it shows the following error:
curl localhost:12345/integrations/azure_exporter/metrics
config to be used for scraping was invalid, subscriptions cannot be empty,resource_type cannot be empty,metrics cannot be empty
@kgeckhart I am running the newest agent as a straight binary service on a Rocky 8 linux machine.
/usr/bin/grafana-agent --config.file /etc/grafana-agent.yaml -disable-reporting -server.http.address=127.0.0.1:9090 -server.grpc.
$ curl localhost:9090/integrations/azure_exporter/metrics
config to be used for scraping was invalid, subscriptions cannot be empty,resource_type cannot be empty,metrics cannot be empty
Oh, I see the problem @tman5 and @manvitha9347. When you curl the integration URL directly with your configuration set up in that manner, the exporter expects you to provide the parameters in the curl command itself. If you want to see the full request that initiates the scrape, you can run:
curl localhost:12345/agent/api/v1/targets | jq '.data[] | select(."discovered_labels"."__metrics_path__" == "/integrations/azure_exporter/metrics") | { job: .labels.job, endpoint: .endpoint }'
Based on the example, it should show something like:
{
"job": "azure-kubernetes-node",
"endpoint": "http://localhost:12345/integrations/azure_exporter/metrics?included_dimensions=node&included_dimensions=nodepool&included_resource_tags=environment&metrics=node_cpu_usage_millicores&metrics=node_cpu_usage_percentage&metrics=node_disk_usage_bytes&metrics=node_disk_usage_percentage&metrics=node_memory_rss_bytes&metrics=node_memory_rss_percentage&metrics=node_memory_working_set_bytes&metrics=node_memory_working_set_percentage&metrics=node_network_in_bytes&metrics=node_network_out_bytes&resource_type=microsoft.containerservice%2Fmanagedclusters&subscriptions=%3Csubscription%3E"
}
@kgeckhart Thanks for that. Yes, I can curl using that command and get the endpoint. However, I'm still not getting metrics written. I see this in the stdout of the agent:
grafana-agent[1549512]: ts=2023-06-13T13:22:42.732174606Z caller=zapadapter.go:84 level=error integration=azure_exporter msg="config to be used for scraping was invalid, subscriptions cannot be empty,resource_type cannot be empty,metrics cannot be empty"
I cannot help further without the curl command and/or the config. Either the config you are using is invalid, or you did not update your curl command from the output in my previous response.
Using the example I sent previously, the curl command would be:
curl http://localhost:12345/integrations/azure_exporter/metrics?included_dimensions=node&included_dimensions=nodepool&included_resource_tags=environment&metrics=node_cpu_usage_millicores&metrics=node_cpu_usage_percentage&metrics=node_disk_usage_bytes&metrics=node_disk_usage_percentage&metrics=node_memory_rss_bytes&metrics=node_memory_rss_percentage&metrics=node_memory_working_set_bytes&metrics=node_memory_working_set_percentage&metrics=node_network_in_bytes&metrics=node_network_out_bytes&resource_type=microsoft.containerservice%2Fmanagedclusters&subscriptions=%3Csubscription%3E
@kgeckhart Note my agent is running on port 9090. This is the curl command, adapted from your previous response:
curl http://localhost:9090/integrations/azure_exporter/metrics?included_dimensions=node&included_dimensions=nodepool&included_resource_tags=environment&metrics=node_cpu_usage_millicores&metrics=node_cpu_usage_percentage&metrics=node_disk_usage_bytes&metrics=node_disk_usage_percentage&metrics=node_memory_rss_bytes&metrics=node_memory_rss_percentage&metrics=node_memory_working_set_bytes&metrics=node_memory_working_set_percentage&metrics=node_network_in_bytes&metrics=node_network_out_bytes&resource_type=microsoft.containerservice%2Fmanagedclusters&subscriptions=<subscription_id>
Running that curl command produces:
[1] 1552521
[2] 1552522
[3] 1552523
[4] 1552524
[5] 1552525
[6] 1552526
[7] 1552527
[8] 1552528
[9] 1552529
[10] 1552530
[11] 1552531
[12] 1552532
[13] 1552533
[14] 1552534
config to be used for scraping was invalid, subscriptions cannot be empty,resource_type cannot be empty,metrics cannot be empty
[1] Done curl http://localhost:9090/integrations/azure_exporter/metrics?included_dimensions=node
[2] Done included_dimensions=nodepool
[3] Done included_resource_tags=environment
[4] Done metrics=node_cpu_usage_millicores
[5] Done metrics=node_cpu_usage_percentage
[6] Done metrics=node_disk_usage_bytes
[7] Done metrics=node_disk_usage_percentage
[8] Done metrics=node_memory_rss_bytes
[9] Done metrics=node_memory_rss_percentage
[10] Done metrics=node_memory_working_set_bytes
[11] Done metrics=node_memory_working_set_percentage
[12] Done metrics=node_network_in_bytes
[13]- Done metrics=node_network_out_bytes
[14]+ Done resource_type=microsoft.containerservice%2Fmanagedclusters
My agent config is posted above
Your output shows the curl command is being split into multiple commands. You can prevent this by quoting the URL:
curl 'http://localhost:9090/integrations/azure_exporter/metrics?included_dimensions=node&included_dimensions=nodepool&included_resource_tags=environment&metrics=node_cpu_usage_millicores&metrics=node_cpu_usage_percentage&metrics=node_disk_usage_bytes&metrics=node_disk_usage_percentage&metrics=node_memory_rss_bytes&metrics=node_memory_rss_percentage&metrics=node_memory_working_set_bytes&metrics=node_memory_working_set_percentage&metrics=node_network_in_bytes&metrics=node_network_out_bytes&resource_type=microsoft.containerservice%2Fmanagedclusters&subscriptions=<subscription_id>'
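For context, an unquoted `&` is the shell's background-job separator, so each `metrics=...` segment after the first becomes its own command, which is exactly the `[1] Done ...` job output above. A minimal sketch with a hypothetical URL (printf stands in for curl):

```shell
#!/bin/sh
# Hypothetical URL, not the real agent endpoint.
url='http://localhost:9090/metrics?metrics=a&metrics=b'

# Quoted: the shell passes the whole string as a single argument,
# so every query parameter reaches the command.
printf '%s\n' "$url"

# Unquoted, the same line would parse as two commands:
#   printf '%s\n' http://localhost:9090/metrics?metrics=a &   <- backgrounded
#   metrics=b                                                 <- separate command
# so the server only ever sees the first parameter.
```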
Thanks for all the feedback. It looks like it is successful now. The curl command threw me off when it gave that error. And I was getting that error in stdout at some point but now I'm not. The curl command hitting the endpoint was probably causing that too. Knowing what the metrics are named helps a lot too.
What's wrong?
After following this guide: https://grafana.com/docs/agent/v0.33/static/configuration/integrations/azure-exporter-config/#multiple-azure-services-in-a-single-config
The azure_exporter will not start scraping properly with this error:
config to be used for scraping was invalid, subscriptions cannot be empty,resource_type cannot be empty,metrics cannot be empty
Steps to reproduce
Configure Grafana Agent per the docs and include the applicable info in the params section of the config.
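For reference, a config of the shape described in the linked docs might look like the following sketch. The subscription placeholder and metric names are copied from the example endpoint earlier in the thread; consult the linked docs for the authoritative schema:

```yaml
integrations:
  azure_exporter:
    enabled: true
    # Disable the default scrape so the params below drive the request.
    scrape_integration: false

metrics:
  configs:
    - name: default
      scrape_configs:
        - job_name: azure-kubernetes-node
          metrics_path: /integrations/azure_exporter/metrics
          static_configs:
            - targets: ["localhost:12345"]
          params:
            subscriptions: ["<subscription>"]
            resource_type: ["microsoft.containerservice/managedclusters"]
            metrics:
              - node_cpu_usage_millicores
              - node_cpu_usage_percentage
```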
System information
No response
Software version
Grafana Agent 0.32
Configuration
Logs
No response