Our solution aims to identify energy-heavy CPU function-calls in a software application. We built a Power BI report that combines the output of a processed Impact Framework manifest with detailed application logging of individual function-calls. This way, system-wide statistics on energy usage can be decomposed over the exact system functionalities used, to assess which functionalities consume (unexpectedly) high amounts of energy. Additionally, the dashboard supports what-if analyses that simulate running the application in another emission zone, during either daytime or nighttime.
We hope to inspire contributors to the Impact Framework to develop such decomposition capabilities in the framework itself.
Problems
In the current state of Green IT, the focus should be on creating awareness, and creating awareness starts with insight into the energy usage of software. Managing and improving software for energy consumption and carbon emissions requires accurate, correct and granular measurements and models. While the Impact Framework is a great starting point, we found that measuring at the more granular function-call level is not yet commonly applied. Analyzing IT energy utilization at the function-call level enables much more in-depth analysis and opens a range of possibilities for improvement towards greener IT.
Carbon intensity differs per country and over time, depending on each country's marginal energy mix. Linking granular, per-function-call energy consumption to information about the electricity supply makes it possible to judge whether specific function-calls should be cancelled, improved code-wise, moved in time, or moved to servers in other emission zones.
Combining function-call data with time and geography creates a large, dense dataset. To use this information optimally when (re)designing software and infrastructure, the data has to be structured and displayed coherently. Our Power BI Dashboard gives a practical example of how this can be done.
Application
Our presented solution is a Power BI model which loads three input files.
The first input file is the output manifest from the Impact Framework, containing system-level energy usage and carbon emissions (in a default emission zone) due to CPU and memory utilization, at per-second granularity.
The second input file contains detailed application logging: in our case individual UI actions performed by end users as well as scheduled system function-calls, although the same approach would work in other contexts, for example database queries. For each function-call it records the number of CPU cycles used, the amount of memory (RAM) used, and a timestamp.
This second file allows decomposing the system-level energy usage and emissions into energy usage and emissions per individual function-call: system-level figures are prorated over the function-calls in each second by each call's share of the CPU cycles and memory used in that second.
The third input file contains historical grid intensity per emission zone per hour. This allows the user to perform what-if analyses of the system's emissions in other emission zones, compared to the default emission zone from the manifest.
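To make the combination of the three files concrete, here is a minimal sketch in Python (the actual decomposition lives in Power BI, and all field names below are invented for illustration): one second of system-level energy is prorated over the function-calls in that second, here by CPU-cycle share only for brevity, and emissions are then recomputed under another zone's grid intensity.

```python
# Hypothetical field names for illustration; real manifests and logs differ.
system_energy_kwh = 0.0005          # system-level energy for one second (from the IF manifest)
calls = [                           # function-calls logged within that same second
    {"name": "render_report", "cpu_cycles": 6_000_000},
    {"name": "sync_daemon",   "cpu_cycles": 2_000_000},
]

# Prorate system energy by each call's share of CPU cycles in that second.
total_cycles = sum(c["cpu_cycles"] for c in calls)
for c in calls:
    c["energy_kwh"] = system_energy_kwh * c["cpu_cycles"] / total_cycles

# What-if: emissions in the default zone vs. an alternative zone (gCO2eq/kWh).
default_intensity, alt_intensity = 400.0, 120.0
for c in calls:
    c["gco2_default"] = c["energy_kwh"] * default_intensity
    c["gco2_alt"] = c["energy_kwh"] * alt_intensity

print(calls)
```

The same proration generalizes to a weighted combination of CPU cycles and memory, which is what the dashboard uses.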
Prize category
Best Content
Judging criteria
Overall Impact
Firstly, we hope to make Green IT easier to grasp for the wider public, moving it more and more from niche to common practice:
Our solution proves how easy it is to create and use good, tangible, understandable and workable insights into energy-utilization on a function-call level.
Our dashboard demonstrates that even people with limited technical knowledge are able to find real opportunities for carbon savings.
On the journey to promoting Green IT, our case study can serve as a demo, showing developers, software engineers, consultants and managers how easy and fun Green IT can be.
Secondly, our solution aims to show the potential benefit that can be gained when energy-utilization can be decomposed to function-call level. In our opinion, embedding this into the Impact Framework itself would simply mean more and easier carbon savings.
Clarity
Our deliverable, the Power BI Dashboard, is intuitive and easy to use for all levels of expertise. There is not much more to say about it: try it out yourself!
Innovation
Combining function-call granularity with time and geography is uncommon, yet it opens up an array of possibilities for carbon savings.
Video
Link to YouTube
Artefacts
View-version of Power BI Dashboard
Repository for used IF manifest, example datasets and editable version of Power BI Dashboard
Usage
Power BI: see first page of Power BI report linked in previous section.
Repository: see readme.md in repository linked in previous section.
Process
Firstly, we obtained system-level statistics on CPU utilization and memory usage. As our application is an on-premises solution running on Windows Server, we used Windows Performance Monitor for this, which yields per-second system-level statistics.
Secondly, we obtained detailed application logging on individual function-calls. Luckily, this data collection was already in place before the hackathon started.
Our initial goal was to use the Impact Framework directly to merge the different input datasets. Unfortunately, we ran into performance and functional challenges, which we will describe in a later section. In the end, we decided to use the Impact Framework to calculate system-level energy usage, and to do the decomposition into function-level energy usage in Power BI. We hope to inspire contributors to the Impact Framework to develop such decomposition capabilities in the Impact Framework itself. If it’s possible in Power BI, it should be possible in JavaScript!
Inspiration
As a company, we want to deliver efficient software that produces the lowest possible emissions. To that end, we implement our software in a programming language with typically low energy usage, and we aim to make carbon-efficient choices in architecture, algorithms, data structures and infrastructure. Nevertheless, relatively energy-heavy function-calls are sometimes unavoidable, and since they consume significant amounts of energy, these function-calls are emission-heavy too.
Our measurements are extraordinarily granular, down to individual function-calls, but they lack actual emission figures, which require combining energy usage with server location. Therefore, we wanted to investigate how these granular measurements can be used to further reduce carbon emissions. Additionally, for software that does not need to run 24/7, it is useful to investigate which energy-heavy function-calls can be reduced, for example function-calls and daemons that run while the software is not being used.
Challenges
The biggest challenge we faced was that we worked with two input datasets: one containing time-series data (system-level utilization per second) and the other containing individual observations starting at particular moments in time (system calls).
The Impact Framework currently works with a single input data array. To make our calculation work completely within the Impact Framework, you would first and foremost need to be able to join data outputs in the framework (system-level stats per second) to another input data array (individual observations within those seconds). To our knowledge, no plugin currently exists for this. In this context, it would also help to have plugins that import data from external data files (CSV, Parquet, etc.) while storing a hash of the file in the manifest to ensure auditability.
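To make the missing join capability concrete, the operation we ended up doing in Power BI looks roughly like this (a pure-Python sketch with invented field names, not actual Impact Framework plugin code): each observation's sub-second timestamp is floored to the whole second, and the per-second system row for that second is attached to it.

```python
from datetime import datetime

# Per-second system-level outputs (as produced by the IF pipeline).
system_rows = [
    {"timestamp": "2024-06-01T10:00:00", "energy_kwh": 0.0004},
    {"timestamp": "2024-06-01T10:00:01", "energy_kwh": 0.0006},
]

# Individual function-call observations with sub-second timestamps.
calls = [
    {"name": "save_document", "timestamp": "2024-06-01T10:00:00.250"},
    {"name": "run_scheduler", "timestamp": "2024-06-01T10:00:01.900"},
]

def floor_to_second(ts: str) -> str:
    """Truncate an ISO timestamp to whole-second resolution."""
    return datetime.fromisoformat(ts).replace(microsecond=0).isoformat()

# Index the per-second rows by timestamp, then attach each call to its second.
by_second = {row["timestamp"]: row for row in system_rows}
joined = [{**call, "system": by_second[floor_to_second(call["timestamp"])]}
          for call in calls]
print(joined)
```

A framework plugin offering this would essentially be a keyed merge of two input arrays on a truncated timestamp.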
Other challenges faced:
The WattTime API has a five-minute granularity; calling it with one-second input data makes many unnecessary API calls, which caching of results could avoid. Moreover, calling it at one-second granularity mostly returns empty data. It would probably be best if the Impact Framework plugin always made its API calls at five-minute granularity.
Plugins that transform input timestamps (e.g. flooring timestamps to the nearest hour) would also help.
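The caching idea behind both points can be sketched as follows (pure Python, with a stand-in `fetch_intensity` function in place of the real WattTime client): floor every timestamp to its five-minute bucket and call the API only once per bucket.

```python
from datetime import datetime, timedelta
from functools import lru_cache

def floor_to_5min(ts: str) -> str:
    """Floor an ISO timestamp to the start of its five-minute bucket."""
    dt = datetime.fromisoformat(ts)
    dt = dt.replace(minute=dt.minute - dt.minute % 5, second=0, microsecond=0)
    return dt.isoformat()

calls_made = 0

@lru_cache(maxsize=None)  # memoize: one underlying call per distinct bucket
def fetch_intensity(bucket: str) -> float:
    """Stand-in for a real grid-intensity API call."""
    global calls_made
    calls_made += 1
    return 400.0  # dummy gCO2eq/kWh

# 300 one-second samples within the same five minutes -> a single API call.
start = datetime.fromisoformat("2024-06-01T10:00:00")
for i in range(300):
    fetch_intensity(floor_to_5min((start + timedelta(seconds=i)).isoformat()))
print(calls_made)
```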
Accomplishments
We are most proud of the conclusion that it can be very easy to identify carbon-heavy function-calls and to find adjustments that ensure fewer emissions are emitted for those calls. This gives us a good basis to show others that it is actually possible, and sometimes easy, to change a software application to reduce its overall emissions. It was only possible because we had the granular data available and used the Impact Framework and Energy maps.
We also like the simplicity of our Power BI Dashboard. Although it is simple and compact, it still allows users to find potential gains in terms of carbon emissions. We truly believe that people with little experience can use the dashboard to draw meaningful and actionable conclusions.
Learnings
We drew many learnings during the hackathon, the most important of which are listed below:
We learned a lot more about Green Software and the IF.
That the IF is ready to use and works quite well.
That actual and tangible results, in terms of reducing carbon emissions, can be reached with limited effort.
That measurements at the function-call level are essential for improving software's carbon footprint. In fact, we conclude that we will need these measurements for many more of our applications going forward.
The hackathon made us more aware of where we still have blind spots on energy usage of our applications. For example, we do not have the same level of detailed analysis on database query level yet, while it is likely that energy usage on the database side is actually higher than on the application server.
What’s next?
Our solution will contribute to the Impact Framework ecosystem in the long term by showing others how easy and rewarding applying the Impact Framework can be. We furthermore hope to inspire contributors to develop decomposition capabilities down to the function-call level in the Impact Framework itself.
A next step in our research is investigating seasonality, to see whether it is feasible to change server locations between seasons. Furthermore, we would be very interested to include the following aspects in the Impact Framework and/or our dashboard:
Energy on database levels
Energy utilization due to hard drive, data transfer, etc.
Summary
Our solution aims at identifying energy-heavy CPU function-calls in a software application. We built a Power BI report that combines output from a processed Impact Framework manifest with detailed application logging on individual function-calls. This way, system-wide statistics on energy usage can be decomposed into the exact system functionalities used, to assess which functionalities consume (unexpectedly) high amounts of energy. Additionally, the dashboard allows executing what-if analyses based on running the application in another emission zone, either during daytime or nighttime.
We hope to inspire contributors to the Impact Framework to develop such decomposition capabilities in the framework itself.
Problems
In the current state of Green IT, focus should be on creating awareness. To create awareness, it is essential to start by giving insights into energy usage of software. Creating awareness and managing and improving software for energy consumption and carbon emissions starts with accurate, correct and granular measurements and models. While the Impact Framework is a great starting point, we found that measuring on a more granular, function-call level, is not yet commonly applied. Analyzing IT energy utilization on a function-call level allows much more in-depth analysis and opens a range of possibilities for analyses and improvement towards greener IT.
Carbon intensity changes per country and over time, depending on the marginal energy mix each country has. Linking granular, per function-call energy consumption information to information about global electricity supply allows judging whether specific function-calls may be cancelled, improved code-wise, moved in time or to server in other emission zones.
Combining function-call data with time and geography creates a large, dense set of data. To be able to use this information optimally when (re)designing software and infrastructure, the data has to be structured and displayed coherently. Our Power BI Dashboard gives a great example of how this can be done.
Application
Our presented solution is a Power BI model which loads three input files.
The first input file is the output manifest from the Impact Framework, containing system-level energy usage and carbon emissions in a default emission zone due to CPU and memory utilization on a per second granularity.
The second input file contains detailed application logging, in this case individual UI actions done by end users as well as scheduled system function-calls. However, it could also be used in other contexts, for example database queries. This file contains the amount of CPU cycles and memory (RAM) used by each function-call, and the timestamp of each function-call. The second input file allows decomposing the system-level energy usage and emissions into energy usage and emissions by individual function-calls. System-level energy usages and emissions are prorated over individual function-calls by the ratio of CPU cycles and memory used by each function-call compared to the total in that second.
The third input file contains historical grid intensity per emission zone per hour. This allows the user to perform what-if analysis on energy usage by the system in different emission zones compared to the default emission zone from the manifest.
Prize category
Best Content
Judging criteria
Overall Impact Firstly, we hope to make Green IT easier to grasp for the wider public, moving green IT more and more from niche to common practice:
On the journey to promoting green IT, our case study can serve as demo, showing developers, software engineers, consultants and managers how easy and fun Green IT can be.
Secondly, our solution aims to show the potential benefit that can be gained when energy-utilization can be decomposed to function-call level. In our opinion, embedding this into the Impact Framework itself would simply mean more and easier carbon savings.
Clarity Our deliverable, the PowerBI Dashboard, is very intuitive and easy to use for all levels of expertise. There is not much more to say about it: try it out yourself!
Innovation Combining function-call granularity with time and geography/location is uncommon. At the same time, though, this opens up an array of possibilities for carbon-savings.
Video
Link to YouTube
Artefacts
View-version of Power BI Dashboard Repository for used IF manifest, example datasets and editable version of Power BI Dashboard:
Usage
Power BI: see first page of Power BI report linked in previous section. Repository: see readme.md in repository linked in previous section.
Process
Firstly, we obtained system-level statistics on CPU utilization and memory usage. As our application is an on-premises solution running on Windows Server, we used Windows Performance Monitor for this. This results in per second system-level statistics.
Secondly, we obtained detailed application logging on individual system calls including data collection. Luckily, this was already finalized before the hackathon started.
Our initial goal was to use the Impact Framework directly to merge the different input datasets. Unfortunately, we ran into performance and functional challenges, which we will describe in a later section. In the end, we decided to use the Impact Framework to calculate system-level energy usage, and to do the decomposition into function-level energy usage in Power BI. We hope to inspire contributors to the Impact Framework to develop such decomposition capabilities in the Impact Framework itself. If it’s possible in Power BI, it should be possible in JavaScript!
Inspiration
We, as a company, want to deliver efficient software that produces the least emissions. To do that, we implement software in a programming language which typically has a low energy usage. On top of that, we aim to make the right choices for carbon-efficient architecture, algorithms, data structures and infrastructure. Nevertheless, relatively energy-heavy function-calls are sometimes unavoidable. Consuming significant amounts of energy, these function-calls are emission-heavy too.
Our measurements are extraordinarily granular, allowing measurements per function-call, but they lack actual emissions based on the energy usage combined with server location. Therefore, we wanted to investigate how these granular measurements can be used to further reduce carbon emissions. Additionally, for software that does not need to run 24/7, it is also useful to investigate what energy-heavy function-calls can be reduced, for example function-calls and daemons need to run while the software is not being used.
Challenges
The biggest challenge we faced was the fact that we worked with two input datasets, one of which containing time-series data (system-level utilization per second) and the other containing individual observations starting at a particular moment in time (system calls).
The Impact Framework currently works with a single input data array. To be able to make our calculation work completely within the Impact Framework, first and foremost you would need to be able to join data outputs in the framework (system-level stats per second) to another input data array (individual observations within those seconds). Up to our knowledge, currently no plugin exists for this. Particularly in this context, it would also help if plugins exist that allow you to import data from external data files (CSV, Parquet, etc.), while storing a hash in the actual manifest to ensure auditability.
Other challenges faced:
Accomplishments
We are most proud of being able to conclude that it can be very easy to identify carbon-heavy function-calls, and to find adjustments that ensure that less emissions are omitted for those function-calls. By accomplishing this, we have a good basis to inspire others that it is actually possible and sometimes easy to make changes to a software application to reduce the overall emissions. This was only possible by having the granular data available and with the use of Impact Framework and Energy maps.
We also like the simplicity of our Power BI Dashboard. Although it is very simple and compact, it still allows users to find potential gains in terms of carbon emissions. We truly believe that people with little experience are able to use the dashboard to draw meaningful and actionable conclusions.
Learnings
Many learnings were drawn during the Hackathon, the most important of which are listed below:
What’s next?
Our solution will contribute long term to the impact framework eco-system by showing others how easy and rewarding the application of the Impact Framework can be. We furthermore hope to inspire contributors to the Impact Framework to develop decomposition capabilities to function-call level in the Impact Framework itself.
A next step to our research is investigating the seasonality effect, to see whether or not it is feasible to change server locations between different seasons. Furthermore, we would be really interested to include the following aspects in the Impact Framework and/or our dashboard:
Energy on database levels
Energy utilization due to hard drive, data transfer, etc.
Embedded carbon
[X] I agree to the hackathon Rules & Terms and Code of Conduct