Closed: mmorellareply closed this issue 11 months ago
In case you have enterprise support with AWS, could you please cut a ticket to us through that channel regarding this issue?
@mmorellareply You marked the issue as closed. Did you figure out what the cause was?
Describe the question Our objective is to instrument our Python code, which runs as a task in an ECS Fargate cluster, and send custom metrics to Grafana through Prometheus. We've set up an ECS task with an aws-otel-collector sidecar. There are no error logs in CloudWatch, but no metrics are being pushed from the collector to our Amazon Managed Prometheus instance. We are seeking a solution, or hints as to whether any mistake was made in the task definition, the config file, or the Python code. Thank you!
Steps to reproduce if your question is related to an action Create an ECS Fargate cluster, a task, and an AMP workspace, then run the container on the cluster. The task exposes custom metrics to be sent to Prometheus.
What did you expect to see? I expected the aws-otel-collector to scrape the metrics exposed by the Python code and send them to Amazon Managed Prometheus.
Environment The following is the task definition:
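In outline, the relevant part of the task definition pairs the application container with the collector sidecar, which reads its configuration from SSM via the `AOT_CONFIG_CONTENT` secret. The sketch below is illustrative, not our exact file: the ARNs, image tags, container names, and the SSM parameter name are placeholders.

```json
{
  "family": "otel-demo",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "taskRoleArn": "arn:aws:iam::123456789012:role/otel-task-role",
  "executionRoleArn": "arn:aws:iam::123456789012:role/otel-execution-role",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    },
    {
      "name": "aws-otel-collector",
      "image": "public.ecr.aws/aws-observability/aws-otel-collector:latest",
      "essential": true,
      "secrets": [
        { "name": "AOT_CONFIG_CONTENT", "valueFrom": "otel-collector-config" }
      ]
    }
  ]
}
```

In awsvpc mode both containers share the task's network namespace, so the collector can reach the app's metrics endpoint at `localhost:8080`.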
The main.py is structured as follows:
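The idea is simply to serve Prometheus-format metrics over HTTP on port 8080. The following is a stdlib-only sketch standing in for the actual code (the real implementation may use a client library such as prometheus_client; the metric name and handler are illustrative):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory counter; a Prometheus client library would normally manage this.
REQUEST_COUNT = {"value": 0}

def render_metrics() -> str:
    # Prometheus text exposition format: HELP/TYPE comments plus samples.
    return (
        "# HELP app_requests_total Total requests handled.\n"
        "# TYPE app_requests_total counter\n"
        f"app_requests_total {REQUEST_COUNT['value']}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence default per-request logging.
        pass

def serve(port: int = 8080) -> None:
    # Bind on all interfaces so the collector sidecar can scrape the endpoint.
    HTTPServer(("0.0.0.0", port), MetricsHandler).serve_forever()
```

In the container, `serve(8080)` runs (in the main thread or a background thread) while the application increments the counter as it does its work.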
Running the code locally, I can curl the endpoint at localhost:8080 and retrieve the metrics:
Additional context We previously tried to push metrics to CloudWatch through the EMF exporter, to no avail.
We also tried with a custom configuration file through SSM Parameter Store:
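The custom configuration stored in SSM follows the usual ADOT pattern: a prometheus receiver that scrapes the app container, and a prometheusremotewrite exporter signing requests with SigV4. A sketch, assuming this setup (the region, workspace ID, job name, and scrape target are placeholders):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "app"
          scrape_interval: 15s
          static_configs:
            - targets: ["localhost:8080"]

exporters:
  prometheusremotewrite:
    endpoint: "https://aps-workspaces.<region>.amazonaws.com/workspaces/<workspace-id>/api/v1/remote_write"
    auth:
      authenticator: sigv4auth

extensions:
  sigv4auth:
    region: "<region>"

service:
  extensions: [sigv4auth]
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```

For remote write to succeed, the task role (not just the execution role) needs permission to call `aps:RemoteWrite` against the workspace.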