Closed · illia-malachyn closed this pull request 3 months ago
> [!CAUTION]
> **Review failed**
>
> The pull request is closed.
These changes integrate Prometheus metrics into the EVM gateway, significantly enhancing the monitoring of API errors and request response times. Key updates include modifications to `.gitignore` and `Makefile`, the implementation of a metrics collection framework across various API files, and the introduction of configuration support for Prometheus scraping.
| Files | Change Summaries |
|---|---|
| `.gitignore`, `Makefile` | Updated `.gitignore` to include `metrics/data/` and modified `Makefile` to remove `metrics/data/` from the `start-local` target. |
| `api/api.go`, `api/debug.go`, `api/server.go` | Integrated Prometheus metrics collection into `BlockChainAPI`, `DebugAPI`, and `httpServer` for enhanced error tracking and request duration measurement. |
| `bootstrap/bootstrap.go` | Added a metrics server and collector for improved metrics tracking during server operations. |
| `metrics/collector.go` | Established a framework for metrics collection to monitor API performance and errors. |
| `tests/helpers.go`, `tests/integration_test.go` | Enhanced test configurations by adding the Prometheus configuration file path to support monitoring in tests. |
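For illustration, here is a minimal sketch of what a collector along these lines could look like, assuming `prometheus/client_golang` and the metric names referenced later in this PR (`api_errors_total`, `api_request_duration_seconds`). The type and method names are hypothetical, not necessarily those in `metrics/collector.go`:

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// Collector bundles the Prometheus instruments used by the API layer.
// Metric names match the queries referenced later in this PR.
type Collector struct {
	apiErrors       *prometheus.CounterVec
	requestDuration *prometheus.HistogramVec
}

// NewCollector registers the instruments with the default registry.
func NewCollector() *Collector {
	return &Collector{
		apiErrors: promauto.NewCounterVec(prometheus.CounterOpts{
			Name: "api_errors_total",
			Help: "Total number of API errors.",
		}, []string{"resolver"}),
		requestDuration: promauto.NewHistogramVec(prometheus.HistogramOpts{
			Name:    "api_request_duration_seconds",
			Help:    "End-to-end request duration by method call.",
			Buckets: prometheus.DefBuckets,
		}, []string{"resolver"}),
	}
}

// ApiErrorOccurred increments api_errors_total for the given resolver.
func (c *Collector) ApiErrorOccurred(resolver string) {
	c.apiErrors.WithLabelValues(resolver).Inc()
}

// MeasureRequestDuration observes how long a method call took.
func (c *Collector) MeasureRequestDuration(start time.Time, resolver string) {
	c.requestDuration.WithLabelValues(resolver).Observe(time.Since(start).Seconds())
}
```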
```mermaid
sequenceDiagram
    participant User
    participant BlockChainAPI
    participant MetricsCollector
    participant Prometheus
    User->>BlockChainAPI: Make API Request
    BlockChainAPI->>MetricsCollector: Start Request Timer
    BlockChainAPI->>BlockChainAPI: Process Request
    alt Request Fails
        BlockChainAPI->>MetricsCollector: Report API Error
    end
    BlockChainAPI->>MetricsCollector: Stop Request Timer
    Prometheus->>MetricsCollector: Scrape Metrics Data
    Prometheus->>User: Provide Metrics Data
```
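The same flow in code, as a hedged sketch: a hypothetical `BlockNumber` handler that times the request and reports failures, assuming the collector sketched above. The `blocks` field and its `LatestHeight` call are stand-ins for the gateway's real backing store:

```go
package api

import (
	"context"
	"time"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

// Collector is the subset of the metrics collector this sketch needs
// (hypothetical; the real type lives in the metrics package).
type Collector interface {
	ApiErrorOccurred(resolver string)
	MeasureRequestDuration(start time.Time, resolver string)
}

// BlockChainAPI wiring is simplified; the real struct has more fields.
type BlockChainAPI struct {
	collector Collector
	blocks    interface{ LatestHeight() (uint64, error) } // hypothetical store
}

// BlockNumber mirrors the diagram: start a timer, process the request,
// report an error on failure, and record the duration on return.
func (api *BlockChainAPI) BlockNumber(ctx context.Context) (hexutil.Uint64, error) {
	start := time.Now()
	defer api.collector.MeasureRequestDuration(start, "BlockNumber")

	height, err := api.blocks.LatestHeight()
	if err != nil {
		api.collector.ApiErrorOccurred("BlockNumber") // increments api_errors_total
		return 0, err
	}
	return hexutil.Uint64(height), nil
}
```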
| Objective | Addressed | Explanation |
|---|---|---|
| Integrate Prometheus to EVM gateway (#359) | ✅ | |
In the code, metrics bloom,
Prometheus watches, no room for gloom.
Errors and times, all recorded fine,
EVM gateway shines, metrics align.
Data flows, insights show,
With Prometheus, we now know. 🐰✨
@sideninja Hey, I'm adding this as a skeleton.
Should I create a separate issue for each metric/task as I did for this one, or can I just mention #125 in every PR?
To test this:
1. Call the `eth_blockNumber` endpoint
2. Run `prometheus --config.file=prometheus.yml`
3. Open `localhost:9090/graph`
4. Query `api_request_duration`
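For reference, a minimal `prometheus.yml` for step 2 might look like the sketch below; the scrape target and port are assumptions, so point `targets` at wherever the gateway actually exposes its metrics endpoint:

```yaml
global:
  scrape_interval: 15s  # how often Prometheus scrapes the gateway

scrape_configs:
  - job_name: evm-gateway
    static_configs:
      # Assumed metrics address; replace with the gateway's actual host:port.
      - targets: ["localhost:8080"]
```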
Are we going to run the Prometheus backend as a sidecar, or is it going to be a separate server running independently? I want to know whether I should add this to docker/docker-compose.
> @sideninja Hey, I'm adding this as a skeleton.
> Should I create a separate issue for each metric/task as I did for this one, or can I just mention #125 in every PR?
No, it's fine for now to reference the same issue, but if you want, you can add a tasklist to the issue itself. No need, though.
@illia-malachyn is this PR ready for review or not yet? Since I reviewed it yesterday, it has changed not just based on comments but also in content (adding metrics to all APIs), so I don't know when it's complete. Generally, it's better to leave a PR in draft as long as you're still working on the content, and open it for review once you finish. Adding all the metrics to the APIs could be another PR, or part of this one (which is fine too), but either way it's better to leave the PR in draft until then.
@sideninja Basically, I re-request a review after fixing your comments to start the next round of the review process. New metrics will be added in a separate PR. In this PR, I wanted to set up Prometheus and add some simple metrics to show you as an example. I can split this one up too if you'd like me to.
I'll keep it as a draft if that's more natural for you!
> @sideninja Basically, I re-request a review after fixing your comments to start the next round of the review process. New metrics will be added in a separate PR. In this PR, I wanted to set up Prometheus and add some simple metrics to show you as an example. I can split this one up too if you'd like me to.
> I'll keep it as a draft if that's more natural for you!
If it's ready for review, put it in review. But I'm confused, because after my first review you added metrics to the APIs.
@sideninja Please take a look at this again once you have time.
@illia-malachyn can you update the base branch again? Sorry!
Closes: #359
This covers the following requirements:

- Measure end-to-end request/response time by method call (track the time taken from the start of a request to the return of the response for each method call, to understand relative performance and user experience using percentiles; a percentile query example follows this list): `api_request_duration_seconds_bucket`, e.g. `api_request_duration_seconds_bucket{resolver="SendRawTransaction"}`
- API requests per time interval metric: `rate(api_request_duration_seconds_count{resolver="SendRawTransaction"}[5m])`
- API calls by API endpoint (most used to least used calls): `sort_desc(sum by(resolver) (rate(api_request_duration_seconds_count{resolver="SendRawTransaction"}[5m])))`
- API errors submitted to a counter metric: `api_errors_total`
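Since the first requirement mentions percentiles, the histogram buckets can be turned into latency percentiles with `histogram_quantile`; for example, an assumed 95th-percentile query per resolver:

```
histogram_quantile(0.95, sum by (le, resolver) (rate(api_request_duration_seconds_bucket[5m])))
```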
Summary by CodeRabbit

**New Features**
- Integrated Prometheus metrics collection into the API layer for error tracking and request duration measurement.
- Added a metrics server and collector during server bootstrap.

**Chores**
- Updated `.gitignore` to adjust directory exclusions.
- Updated `Makefile` to clean up the `metrics/data/` directory before running the application.