ArneTR opened this issue 6 months ago
@ribalba FYI
Hey @ursvill @seanmcilroy29
Just checking in on this. Did you see the use case? Since it was not mentioned in the last meeting I thought I'd give it a ping 😇
@ArneTR - I've added this to the next WG agenda for review and I'll circulate it with the members prior to the call
@seanmcilroy29 - Once approved, please let me know.
Hey @seanmcilroy29, I updated this PR's description with what we discussed in the last WG meeting.
What I changed
I hope it now qualifies as a proper use case, and I'd be happy if you brought it to the agenda for the next WG meeting.
If you have any questions, let me know!
@jawache On a side note: We heavily trimmed down how Eco-CI works, reducing the overhead immensely. This work also contributed to some upgrades for Cloud Energy, which is now fully containerized and modular, which it had not been before. Happy to talk if it could be a contribution for Impact Framework. Hit me up if you still want to talk about that.
Thanks for all the feedback on Eco-CI from the last WG meeting. It helped us sharpen the output of Eco-CI and make it clearer to the end user how we derive the values.
Hey @seanmcilroy29, just a gentle ping on this in case it fell through the cracks
@ArneTR - This will be reviewed and discussed at the next Standards WG meeting
Hey @seanmcilroy29
just another gentle ping on this since it has been a while and it would be sad if this fell into oblivion.
Was there another Standards WG meeting in the meantime? Do you need any additional info?
Hi @ArneTR and @seanmcilroy29,
The SWG can approve a submission to the sci-guide, but articles are different: they have a higher bar to meet and the process is different.
The bar right now is very subjective and based on rough guidance, so I've asked the team to codify what this bar is, so it's a lot clearer to everyone involved exactly what we need to see in a case study. Until we've got that doc written I've asked for a pause on all case study articles.
We're still figuring out the details but we'll be asking at least for one fully worked out example of an SCI score, with numbers, models, coefficients etc... so people can see exactly how you computed it and more importantly be able to replicate and verify the computation themselves easily.
To that end we're also going to ask for this example computation to be written in the IF manifest file format, which is becoming our standard format for communicating environmental impacts. We'll provide guidance/templates that you can fill out to generate one. It's not hard, you just have to stick the right numbers in the right places in a YAML file; we'll give you a template.
Our goal for case studies is for others to learn from existing examples how to compute their own SCI scores, so for us they are teaching tools.
Case studies for commercial products (which I know this isn't) will have stricter requirements.
My goal is to get these guides ready by the end of this month (October); I'll share them as soon as they are ready.
Cheers and thanks for your patience @ArneTR!
Executive Summary
This use case submission demonstrates how to derive an SCI score for a CI/CD pipeline. I will be using GitHub as an example, but it also works with GitLab, Jenkins, etc.
The tool we are using is called Eco-CI. It leverages an open-source machine-learning model implementation called Cloud Energy to estimate the energy consumption and CO2 emissions of the machine it is running on.
Description of problem
A major part of a modern software developer's work is testing software. Most of this is done through platform services from GitHub, GitLab and many others.
Since these services have quite generous free tiers, it is not uncommon to have pipelines run on every push or every pull request, even in the smallest software projects.
For a green developer, at some point the natural question arises: How much CO2 do these pipeline runs actually emit?
Since many of these runs happen in the aforementioned platform services, it is typically not possible to perform an actual proper measurement. However, a very feasible approach is to estimate the energy and CO2 consumption based on comparable energy data from machines that are publicly accessible.
How the use case solves the problem
The Eco-CI plugin from Green Coding Solutions uses this method by leveraging the SPECpower database to train an XGBoost-based ML model, which is then integrated into a free and open-source plugin that can be used natively in GitHub or GitLab pipelines.
It will then show the energy and CO2 (SCI) for every run of the pipeline.
Conveniently it also hooks into the pull request feature, so developers can see the information right in the pull request conversation.
Example screenshot of the pull request integration:
SPECpower
SPECpower_ssj2008 is a benchmark developed by the Standard Performance Evaluation Corporation (SPEC) specifically designed to evaluate the power and performance characteristics of server-class computers.
The benchmark simulates a Java-based, multi-tiered workload and measures the power consumption of the server under various load conditions. It is based on real-world applications and workloads commonly found in data centers, such as web servers, application servers, and database servers.
SPECpower_ssj2008 provides a standardized way to measure the energy efficiency of servers, allowing vendors and consumers to compare the power efficiency of different server platforms. The benchmark reports the performance metric in operations per second (ops/s) and the power consumption in watts (W), which are used to calculate the overall energy efficiency score (ops/W).
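To make the relationship between the reported metrics concrete, here is a rough sketch of how an overall ops/W figure can be derived from per-load-level measurements. The numbers and the exact aggregation are illustrative only and do not replace the official SPEC tooling.

```python
# Rough sketch: aggregate per-load-level results into an overall ops/W figure.
# All numbers are made up for illustration.
load_levels = [
    # (ssj_ops, avg_power_watts) per target load level, plus active idle
    (3_800_000, 260.0),   # 100% load
    (1_900_000, 170.0),   # 50% load
    (0,          55.0),   # active idle
]

total_ops = sum(ops for ops, _ in load_levels)
total_power = sum(power for _, power in load_levels)
overall_ops_per_watt = total_ops / total_power
print(f"overall ops/W ≈ {overall_ops_per_watt:,.0f}")
```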
Example of a server power curve from the SPECpower database:
Cloud Energy
All of this benchmark data is freely available in an online database and can be used, for instance, to train an ML model that estimates the power consumption of a machine by feeding it known characteristics like CPU model, RAM amount, frequency, etc.
This project from Green Coding Solutions is called Cloud Energy; it is also free and open source and provides the underpinning of the Eco-CI plugin for CI/CD.
The open GitHub repository shows how to set up the model on any PC running Python 3. Parameters like CPU make, frequency, etc. can even be auto-filled.
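To illustrate the idea behind such a model, here is a minimal sketch of training an XGBoost regressor on SPECpower-style rows and querying it for a power estimate. The feature set, values and interface are made up for illustration and do not reflect the actual Cloud Energy implementation.

```python
# Minimal sketch of a SPECpower-trained power model (illustrative only).
import pandas as pd
from xgboost import XGBRegressor

# Hypothetical training data derived from SPECpower results:
# machine characteristics + utilization -> measured power draw.
df = pd.DataFrame({
    "cpu_threads":  [128, 128, 64, 64],
    "cpu_freq_mhz": [2450, 2450, 3000, 3000],
    "ram_gb":       [512, 512, 256, 256],
    "utilization":  [10, 100, 10, 100],    # percent
    "power_watts":  [180, 520, 110, 310],  # made-up target values
})

model = XGBRegressor(n_estimators=100)
model.fit(df.drop(columns="power_watts"), df["power_watts"])

# Estimate the full-machine power of a shared runner's host at 35% utilization.
estimate = model.predict(pd.DataFrame([{
    "cpu_threads": 128, "cpu_freq_mhz": 2450, "ram_gb": 512, "utilization": 35,
}]))
print(f"estimated machine power: {estimate[0]:.0f} W")
```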
Comparison of the Cloud Energy Model estimation to a measured machine:
Eco-CI
Eco-CI is effectively a small snippet of code that can be integrated into the typical YAML workflow files of GitHub and GitLab.
Here is a sample screenshot of a PR where the plugin is integrated.
A sample integration looks like this:
The far more thorough documentation can be found directly on GitHub.
The data can optionally be sent to a free, hosted, open-source SaaS called the Green Metrics Tool, which can also be self-hosted. There the data is aggregated over time, so changes over time are easier to see.
Example screenshot of CI/CD runs for the Django project:
Getting the SCI score
To derive an SCI score we need four parts:
The current energy we get from Cloud Energy. We obtain the CPU utilization via a simple bash script that runs every second and reads from the underlying Linux procfs.
Since we need to know the characteristics of the machine we are running on, we use pre-calculated power curves based on the specifications of the system that GitHub provides. For instance, for a shared runner this would be:
Since we are in a shared environment, not all of the total power we would derive from the SPECpower database for a given CPU utilization should be attributed to the pipeline run. Since GitHub assigns 4 threads to every runner and the CPU provides 128 threads by design, it is safe to assume that we can divide the power linearly and that our share will be 4/128.
This attribution can be done in different ways, and there is a lengthy discussion of possible implementations in our GitHub Discussions.
The same process applies analogously to GitLab and can be extended to any other machine.
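As a rough illustration of the sampling and attribution described above, here is a minimal Python sketch. The real Eco-CI uses a bash script reading procfs; the full-machine power value is a placeholder for the power-curve lookup, and the 4/128 share is the assumption discussed above.

```python
# Sketch: sample CPU utilization from /proc/stat once per second and
# attribute a linear 4/128 share of the machine's power to our runner.
import time

def read_proc_stat():
    """Return (busy, total) jiffies from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait
    total = sum(fields)
    return total - idle, total

busy1, total1 = read_proc_stat()
time.sleep(1)
busy2, total2 = read_proc_stat()
utilization = (busy2 - busy1) / (total2 - total1)   # 0.0 .. 1.0

# Hypothetical full-machine power at this utilization (from the power curve).
machine_power_watts = 300.0

# Our runner gets 4 of the host's 128 threads, so a linear 4/128 share.
runner_power_watts = machine_power_watts * 4 / 128
energy_joules_this_second = runner_power_watts * 1.0
print(f"utilization={utilization:.1%}, attributed power={runner_power_watts:.2f} W")
```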
To get the next value, the embodied carbon, we employ the excellent database from Boavizta, called Datavizta. Since all the specs are known, we just plug them into the calculator.
We assume a disk size of 448 GB in total, as this would be all the clients' shared disk space added up for the full machine. According to the Boavizta calculator this gives a total of 1,151.70 kgCO2e for the manufacturing. With a 4/128 split this is 35,990.625 gCO2e. We further assume that the machine will be in use for 4 years and then pro-rate the embodied carbon according to the runtime of the pipeline.
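For clarity, the pro-rating arithmetic can be written out as follows. The manufacturing figure, the 4/128 share and the 4-year lifetime are the values stated above; the 5-minute pipeline runtime is an illustrative assumption.

```python
# Worked example of the embodied-carbon pro-rating described above.
MANUFACTURING_KG_CO2E = 1151.70          # Boavizta/Datavizta estimate for the host
RUNNER_SHARE = 4 / 128                   # our slice of the machine
LIFETIME_SECONDS = 4 * 365 * 24 * 3600   # assumed 4-year usage period

runner_embodied_g = MANUFACTURING_KG_CO2E * 1000 * RUNNER_SHARE
assert abs(runner_embodied_g - 35990.625) < 1e-6   # matches the figure above

pipeline_runtime_seconds = 300           # e.g. a 5-minute pipeline run
m_grams = runner_embodied_g * pipeline_runtime_seconds / LIFETIME_SECONDS
print(f"embodied carbon attributed to this run: {m_grams:.4f} gCO2e")
```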
The third component, the grid intensity, we get from Electricity Maps, which is already a trusted data source for the Carbon Aware SDK from the GSF.
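A lookup of the current grid intensity could look roughly like the sketch below. The endpoint, parameters and response field are assumptions based on the public Electricity Maps API and may differ; Eco-CI performs this lookup for you.

```python
# Illustrative grid-intensity lookup (endpoint and response shape assumed).
import requests

ZONE = "DE"                               # grid zone of the data center
API_TOKEN = "your-electricitymaps-token"  # placeholder

resp = requests.get(
    "https://api.electricitymap.org/v3/carbon-intensity/latest",
    params={"zone": ZONE},
    headers={"auth-token": API_TOKEN},
    timeout=10,
)
resp.raise_for_status()
grid_intensity_g_per_kwh = resp.json()["carbonIntensity"]  # gCO2e/kWh
print(f"grid intensity for {ZONE}: {grid_intensity_g_per_kwh} gCO2e/kWh")
```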
Now the final piece, the unit of work, is luckily very easy: it is one pipeline run.
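Putting the four parts together, a minimal sketch of the final computation following the SCI formula, SCI = (E * I + M) / R, could look like this. All input numbers are placeholders consistent with the examples above.

```python
# Assemble the SCI score from its four parts: SCI = (E * I + M) / R.
energy_kwh = 0.004            # E: energy of the pipeline run (from Cloud Energy)
grid_intensity = 380.0        # I: gCO2e/kWh (from Electricity Maps)
embodied_g = 0.0856           # M: pro-rated embodied carbon in gCO2e (see above)
runs = 1                      # R: functional unit = one pipeline run

sci = (energy_kwh * grid_intensity + embodied_g) / runs
print(f"SCI ≈ {sci:.3f} gCO2e per pipeline run")
```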
Main benefits of the solution
The solution makes obtaining CO2 location data and energy values simple and robust: the integration can be done with a few lines of code, and because of how pipelines work on GitHub and GitLab, it does not interfere with the rest of the pipeline.
Since the solution is completely free and open source, it can even be integrated into enterprise environments.
What was the outcome, how were carbon emissions reduced
At the moment the solution only provides a measurement, but it is planned to also port over optimizations that are currently present in the Green Metrics Tool, such as: