Closed eledhwen closed 5 months ago
Hello,
Thanks for the contribution. If I understand correctly, you use CodeCarbon in BentoML and want to expose CodeCarbon measures in BentoML's /metrics endpoint, right?
You could call tracker._prepare_emissions_data() at any time, but your proposal seems better than calling a function we tag as private.
Hello, thanks for reporting your use case! Would you have an example of how you expose your metrics to Prometheus scraping? (Even if it's not your code.) My guess is that we might be able to update the current Prometheus integration rather than adding a new parameter to the tracker's init.
The main problem is that standard management of Prometheus metrics goes through in-memory representations of counters, gauges, etc. This is how it works in abstractions like Micrometer, and this is how it is implemented in BentoML as well.
My understanding is that Prometheus currently supports an official Python library that does just that: it manages metric representations and can expose them in a variety of ways. But it doesn't look like it has that much traction at the moment, and at the very least, it is not how metrics were implemented in a BentoML application. So we cannot rely on a "standard" Prometheus implementation for this use case.
On the other hand, it doesn't make any sense for codecarbon to have a native integration with the BentoML metrics API.
Enabling composition-based extension points, however, would allow this use case and many more. But I can see how it conflicts with the current design.
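To make the composition idea concrete, here is a minimal sketch with stand-in classes. CodeCarbon's real `BaseOutput` and emissions data live in its codebase; everything below (`EmissionsData`, `InMemoryGauge`, `GaugeOutput`) is illustrative only, showing how a custom output could feed a live, in-memory gauge of the kind BentoML/Micrometer manage:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Stand-ins for CodeCarbon's real types; names here are hypothetical.
@dataclass
class EmissionsData:
    emissions: float        # kg CO2eq
    energy_consumed: float  # kWh

class BaseOutput(ABC):
    @abstractmethod
    def out(self, data: EmissionsData) -> None: ...

class InMemoryGauge:
    """Mimics a Prometheus-style gauge as kept in memory by BentoML/Micrometer."""
    def __init__(self) -> None:
        self.value = 0.0
    def set(self, value: float) -> None:
        self.value = value

class GaugeOutput(BaseOutput):
    """Custom output that updates a live gauge instead of pushing to a gateway."""
    def __init__(self, gauge: InMemoryGauge) -> None:
        self.gauge = gauge
    def out(self, data: EmissionsData) -> None:
        self.gauge.set(data.emissions)

gauge = InMemoryGauge()
output = GaugeOutput(gauge)
output.out(EmissionsData(emissions=0.042, energy_consumed=0.11))
print(gauge.value)  # 0.042
```

The point is only that, once outputs are composable, this kind of handler can be plugged in from the application side with no CodeCarbon-side knowledge of BentoML.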
I made a separate "demo" PR of a relatively non-invasive way of unifying "live" output handler executions.
Output: unifying execution of 'live' output handlers #459
But even writing this, it looks like we could go even further and add some kind of flag or wrapper around `BaseOutput` for contextualizing when handlers should be executed (i.e. during flush/exit and/or live ops).
Thanks for providing this example. Ideally, we would like to provide native support for your use case without having to add custom handlers, though that may end up being the ultimate solution. The goal is to avoid complex implementations of emissions-data aggregation on the user side, as this logic is already managed internally by the tracker.

From what I understand, it should be possible to build the BentoML graphs, as they rely on the official Python Prometheus client. This is already implemented by the push mode, in the `PrometheusOutput` class. Let's first add support for gauge metrics in pull mode, using `PrometheusOutput`, then add support for other metric types. What do you think about that?
cc @inimaz
If I understand correctly, you plan on leveraging the fact that BentoML's implementation of Prometheus metrics is backed by the official client, so in theory, as long as the client uses a single registry for all its metrics, manipulating the client directly should result in CodeCarbon metrics showing up in BentoML's /metrics endpoint.
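The shared-registry mechanism can be sketched with a toy registry. This is a plain-Python stand-in for the official client's `CollectorRegistry`, not its real API; the metric names are illustrative:

```python
# Toy registry illustrating the single-registry idea: any producer that
# registers against it automatically appears in the rendered exposition.
class Registry:
    def __init__(self):
        self._metrics = {}

    def gauge(self, name):
        # Create-or-fetch a named gauge (represented as a plain dict here).
        return self._metrics.setdefault(name, {"value": 0.0})

    def render(self):
        # Prometheus-text-exposition-like dump of every registered metric.
        return "\n".join(f"{n} {m['value']}"
                         for n, m in sorted(self._metrics.items()))

registry = Registry()
# A BentoML-style latency metric and a CodeCarbon emissions metric share it:
registry.gauge("bentoml_request_duration_seconds")["value"] = 0.12
registry.gauge("codecarbon_emissions_kg")["value"] = 0.003
print(registry.render())
```

With a shared registry, whoever serves /metrics renders both metrics without either producer knowing about the other.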
I'm still partial to having the ability to expand output strategies in a more flexible manner, but for our current use-case I could see that working, yes :-)
Hello, sorry for the delayed response, but we finally have some news on this topic. @inimaz refactored the output handler in #552; is it still compatible with your vision? We are planning to add another OTEL consumer output, Logfire, in #542, meaning that we now have three different needs to drive a correct abstraction. If you still think your implementation would be better, we'll merge it. Are you able to rebase on your own, or should we help you?
If you go that way, it would indeed probably make sense to start leveraging the common abstraction and work by composing `BaseOutput` implementations.
However, in the initial PR I made an attempt not to introduce breaking changes in the current behaviour. Depending on your stance on API evolution, this could be the right moment to deprecate the various output-specific configuration options, so that instead of something like (this is pseudo-code):
```python
emission_tracker = EmissionTracker(
    enable_prometheus=True,
    prometheus_gateway_endpoint="http://localhost:9000/api",
    enable_logging=True,
    logging_level=INFO,
    # ...
)
```
the configuration would instead be entirely determined through composition, such as:
```python
emission_tracker = EmissionTracker(
    outputs=[
        PrometheusOutput(gateway="http://localhost:9000/api"),
        LoggingOutput(logging_level=INFO),
        CSVOutput(),
        CustomOutput(),
        # ...
    ]
)
```
This would probably make things more uniform, but it may require some more refactoring work, and preserving backward compatibility may end up being confusing. What do you think?
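For illustration, the composed form implies the tracker simply fans each measurement out to its configured outputs. A minimal dispatch sketch, where `EmissionTracker`, `flush`, and `RecordingOutput` are hypothetical names, not CodeCarbon's actual API:

```python
# Hypothetical composition-based tracker: fans each measurement out to
# every configured output; each output decides what to do with it.
class EmissionTracker:
    def __init__(self, outputs):
        self.outputs = list(outputs)

    def flush(self, data):
        for output in self.outputs:
            output.out(data)

calls = []

class RecordingOutput:
    """Stand-in BaseOutput that just records what it receives."""
    def out(self, data):
        calls.append(data)

tracker = EmissionTracker(outputs=[RecordingOutput(), RecordingOutput()])
tracker.flush({"emissions_kg": 0.01})
print(len(calls))  # 2
```

The tracker stays oblivious to what each output does, which is what makes custom (e.g. BentoML-backed) outputs possible without CodeCarbon changes.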
I'm personally okay with that: it extends the tracker's capabilities, and we could make this addition without breaking current usage. Your initial implementation is still valid; we could rebase it, merge it, and produce a pre-release for you to test. This gives us time to implement the composite output, as we are finishing the Logfire output topic.
I like this! That could potentially solve as well #541.
Just to keep in mind: since we let the user save all the config in a `.ini` file, we should somehow continue to let them do this. This could be done with some kind of mapping, so that `save_to_file=True` in the file is equivalent to:

```python
emission_tracker = EmissionTracker(outputs=[CSVOutput()])
```
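Such a mapping could be sketched as below. The flag names and output classes are stand-ins (only `save_to_file` comes from the discussion above); `configparser` is the standard-library module behind `.ini` parsing:

```python
from configparser import ConfigParser

# Stand-in output classes; hypothetical, not CodeCarbon's real ones.
class CSVOutput: ...
class PrometheusOutput: ...

# Mapping from legacy .ini boolean flags to composed outputs.
# "save_to_file" comes from the discussion; "prometheus" is hypothetical.
FLAG_TO_OUTPUT = {
    "save_to_file": CSVOutput,
    "prometheus": PrometheusOutput,
}

def outputs_from_ini(text):
    cfg = ConfigParser()
    cfg.read_string(text)
    section = cfg["codecarbon"]
    return [cls() for flag, cls in FLAG_TO_OUTPUT.items()
            if section.getboolean(flag, fallback=False)]

outs = outputs_from_ini("[codecarbon]\nsave_to_file = True\n")
print(type(outs[0]).__name__)  # CSVOutput
```

This keeps the `.ini` surface stable while the tracker internally only ever sees a list of outputs.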
Closing this as this has become redundant with #459
Hi there,
This one needs some context as to what is being attempted, because from what I could tell, our use case didn't quite fit anywhere in the code base as it currently stands.
We are considering using CodeCarbon for systematic carbon measurements in a model-serving context. The gist of it is that we have fleets of BentoML-based applications, and we would like all of them to measure their carbon emissions and expose them in quasi-real-time through their /metrics Prometheus endpoint (which exists by default for all BentoML apps).

From what I could gather from the codebase, there already is a "Prometheus integration" in CodeCarbon, but it works with a push gateway, which we absolutely do not want to introduce, as it would turn carbon-emission tracking from opt-out into opt-in with additional deployment requirements. Basically, we want the carbon footprint to be tied to the existing performance metrics that will be collected for typical monitoring needs.
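For concreteness, the pull model described here amounts to serving current metric values on an HTTP endpoint that Prometheus scrapes, as opposed to pushing them to a gateway. A self-contained sketch using only the standard library; the metric name and value are illustrative, and a real deployment would reuse BentoML's existing /metrics endpoint rather than start its own server:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Illustrative value that a "live" output handler would keep updated.
EMISSIONS_KG = 0.0025

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Minimal Prometheus text-exposition response.
        body = f"codecarbon_emissions_kg {EMISSIONS_KG}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), MetricsHandler)  # 0 = ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
body = urlopen(f"http://127.0.0.1:{port}/metrics").read().decode().strip()
server.shutdown()
print(body)  # codecarbon_emissions_kg 0.0025
```

With this model there is nothing extra to deploy: the scraper pulls from an endpoint the app already serves.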
So, while working on how to integrate CodeCarbon measurements with BentoML metrics reporting, I found that although there is a `BaseOutput` contract that can be extended, the tracker implementations do not allow much in the way of composition.

What I did here is naively introduce an arbitrary list of `BaseOutput` implementations, expected to be triggered at the same time the API calls are triggered. This way, we can plug in a custom output implementation that does whatever is needed with BentoML's implementations of metric counters, gauges, etc. But from what I can tell, this isn't quite in the spirit of the current codebase.

So, what do you think?
From my point of view, the more abstract composition model is preferable in every scenario, but I understand that simply dropping the current API and Prometheus settings would break compatibility with earlier versions. Adding generic composition on top of the current settings looks odd, but at least it's transparent.