In the past, the WG has discussed the challenges of calculating an SCI score that can be compared with other scores. The WG is now discussing an approach in which standardized hardware and workloads are defined, so that different software with the same core functionality can be tested on them and compared. We are scoping out what it would take to establish such a standard.
The WG is actively taking notes here: https://docs.google.com/document/d/1rCNbwKiegUorrtuBuw-Ywdum1rHBWfAL-gsaOmGCcpc/edit#heading=h.3skgaml8cvuo
This addresses questions raised in:
340 Define Letter (A to G) Based Energy Ratings for a Given Software Application
233 Investigate a software procurement standard
326 Repeatability of Energy measurements
208 Software system boundary requirements for comparing scores
308 Design Time versus Runtime - I as a constant
348 Research project on Sustainable Computing
260 Could we calculate the impact of Algorithm selection on overall carbon emission of the software?
253 Using SCI to measure purchased software
244 SCI depends on hardware and usage
189 Consider how uncertainty from simplification will impact the goals of transferability and comparability
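As a minimal illustration of why scores are hard to compare across deployments (cf. issues 244 and 308 above), the sketch below applies the SCI formula, SCI = (E × I + M) per R, to the same workload under two hypothetical grid carbon intensities. All numbers are made up for illustration; only the formula comes from the SCI specification.

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: float) -> float:
    """SCI = (E * I + M) per R, in gCO2e per functional unit.

    E: operational energy (kWh), I: grid carbon intensity (gCO2e/kWh),
    M: embodied emissions (gCO2e), R: number of functional units.
    """
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Identical software and workload (hypothetical figures), run in two regions:
score_low_carbon = sci(1.2, 50.0, 100.0, 1000)   # e.g. hydro-heavy grid
score_high_carbon = sci(1.2, 700.0, 100.0, 1000)  # e.g. coal-heavy grid

print(score_low_carbon)   # 0.16 gCO2e per functional unit
print(score_high_carbon)  # 0.94 gCO2e per functional unit
```

Because I (and the hardware behind E and M) varies between deployments, the raw scores differ even though the software is the same; this is the motivation for defining standardized hardware and workloads.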