Green-Software-Foundation / standards-wg

GSF Standards Working Group

2024.10.24 #124

Open seanmcilroy29 opened 5 days ago

seanmcilroy29 commented 5 days ago

2024.10.24 Agenda/Minutes


Time 1600 (BST) - See the time in your timezone


Antitrust Policy

Joint Development Foundation meetings may involve participation by industry competitors, and the Joint Development Foundation intends to conduct all of its activities in accordance with applicable antitrust and competition laws. It is, therefore, extremely important that attendees adhere to meeting agendas and be aware of and not participate in any activities that are prohibited under applicable US state, federal or foreign antitrust and competition laws.

If you have questions about these matters, please contact your company counsel or counsel to the Joint Development Foundation, DLA Piper.


Recordings

The WG agreed to record all meetings. This meeting's recording will be available until the next scheduled meeting.


Roll Call

Please add 'Attended' to this issue during the meeting to denote attendance.

Any untracked attendees will be added by the GSF team below:


Agenda


Use Case Review (20 Mins presentation with Q&A)


SCI for AI - Defining the workshop - Navveen and Henry


Project Review updates


Articles


For Review


Note: WG use case template submissions - after submitting this issue, your use case will be added to the WG agenda for discussion. Article submissions - once you submit this issue, it will be assigned to the GSF Editor for review.


Future meeting Agenda submissions


Next Meeting

Reminder: Europe - daylight saving ends on 27 Oct 2024; US - daylight saving ends on 3 Nov 2024.

Adjourn


Standing Agenda / Future Agenda submissions

GadhuNTTDATA commented 1 day ago

Attended

Henry-WattTime commented 1 day ago

Attended

jmcook1186 commented 1 day ago

Attended

navveenb commented 1 day ago

Attended

filga commented 1 day ago

Attended

sami-gharbi commented 1 day ago

Attended

PindyBhullar commented 1 day ago

Attended

sasthana-tw commented 1 day ago

Attended.

seanmcilroy29 commented 1 day ago

MoM

Henry opens the meeting at 1600 (BST)

Summary Notes

The meeting focused on Amadeus' implementation of the Impact Framework for green IT, highlighting their MVP for measuring carbon emissions in Microsoft Azure. Amadeus manages 10,000 software engineers across ten countries, with 200 applications and 200 clusters. Their solution, Carmen, leverages the Impact Framework to compute CO2 and energy consumption, at 15-second granularity for application telemetry and hourly granularity for infrastructure. Challenges include handling shared resources and allocating costs to services. The team discussed future steps, including potential AI emissions measurement and collaboration with the GSF on a workshop to develop the SCI standard for AI.

Minutes

Amadeus' Green IT Implementation Speaker 2 describes Amadeus' transition from a private data centre to the cloud, primarily using Microsoft Azure and Google Cloud. The organization has hundreds of clusters in ten countries and 10,000 software engineers. The challenge is adopting green IT practices at large scale, with the goal of an SCI score for each application. An MVP was implemented to present carbon emissions for applications deployed in Microsoft Azure.

Introduction to Carmen The Amadeus team introduces Carmen, a branded solution leveraging the Impact Framework to centralize carbon measurement services. Carmen consumes application telemetry and infrastructure data to compute CO2 and energy consumption. The solution covers monitoring, carbon as a service, and reporting use cases. Carmen aims to avoid re-implementing the Impact Framework for each application, focusing on reusability and efficiency.

Carmen's Use Cases and Demonstration Amadeus' team explains the three use cases: monitoring, carbon as a service, and reporting. The monitoring use case is based on real-time data, while the reporting use case supports long-term decision-making. A demo shows the pipeline for infrastructure-based reporting, which involves gathering data, computing energy and carbon, and reallocating the results to services. The demo highlights the scale of the data involved, with over 30,000 virtual machines and 1.6 million lines of Impact Framework output.
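The three pipeline stages described above (gather data, compute energy and carbon, reallocate to services) can be sketched as follows. This is only an illustration: Carmen's internals and its Impact Framework plugins are not public, so every name, the linear power model, and all numbers here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VmSample:
    """One gathered infrastructure data point (hypothetical schema)."""
    vm_id: str
    cpu_util: float  # average CPU utilisation over the window, 0.0-1.0
    hours: float     # duration of the sample window in hours

def energy_kwh(sample: VmSample, tdp_watts: float = 200.0) -> float:
    """Stage 2a: very rough linear power model - energy scales with CPU utilisation."""
    return tdp_watts * sample.cpu_util * sample.hours / 1000.0

def carbon_g(energy: float, grid_intensity_g_per_kwh: float = 450.0) -> float:
    """Stage 2b: operational carbon = energy * grid carbon intensity (gCO2e)."""
    return energy * grid_intensity_g_per_kwh

def report(samples: list[VmSample]) -> dict[str, float]:
    """Stage 3 input: per-VM carbon, ready to be reallocated to services."""
    return {s.vm_id: carbon_g(energy_kwh(s)) for s in samples}

fleet = [VmSample("vm-a", 0.60, 1.0), VmSample("vm-b", 0.25, 1.0)]
print(report(fleet))  # gCO2e per VM for the window
```

In a real deployment the power model and grid intensity would come from Impact Framework plugins rather than hard-coded defaults.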

Challenges and Future Steps Amadeus' team outlines next steps, including evolving the MVP dashboard into a product and implementing non-functional requirements. The FinOps use case and measurement as a service are discussed, with plans to integrate carbon-intensity APIs. The goal is to implement a carbon-aware scheduler for temporal and spatial shifting by 2025. Amadeus requests collaboration with the GSF to pilot the Impact Framework integration and share feedback.

Discussion and Questions Asim congratulates the team on their work and asks about the time granularity for computing emissions. Amadeus' team explains the granularity: hourly data for infrastructure and 15-second data for application telemetry. Henry inquires about user feedback, and Amadeus' team mentions that users are eager to use the tool despite it being an MVP. Navveen asks whether AI emissions are being measured, and Amadeus' team clarifies that they focus on machine learning for their own processes.

Scalability and Allocation Challenges Gadhu asks about handling shared resources, and the Amadeus team explains the reallocation process from infrastructure to service level. Asim questions the allocation methodology, and Amadeus' team describes using the FinOps library for reallocation. The discussion covers the challenges of allocating costs and emissions to customers and the need for a technical solution. Joseph offers to help Amadeus migrate to the latest version of the Impact Framework for better performance.

Key considerations and challenges in allocating carbon emissions to customers and services:

  • Shared resources: When a single resource (e.g., a VM, database) is used by multiple applications or services, it can be challenging to accurately allocate the associated carbon emissions to each individual user or service.
  • Granularity of data: The level of granularity in the carbon emissions data (e.g., hourly, daily) can impact the accuracy and usefulness of the allocation to customers and services.
  • Complexity of commercial models: The commercial models and pricing structures used to sell managed services to customers may not align neatly with the carbon emission allocation, making it difficult to pass on the emissions to customers directly.
  • Lack of standardized approaches: There is currently no widely adopted standard or best practice for how to allocate carbon emissions to customers and services. Organizations may use different methodologies, making it difficult to compare or aggregate across the industry.
  • Technical limitations: The ability to accurately measure and attribute carbon emissions to specific applications, services, or customers may be limited by the available data and tooling.
  • Organizational challenges: Allocating carbon emissions may require coordination and alignment across different teams and business units, which can be organizationally complex.

The Amadeus team highlighted that they are currently facing these challenges and are working to find technical and organizational solutions to address them, potentially in collaboration with the broader community.
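One common way to address the shared-resource problem above is to split a resource's emissions across services in proportion to a usage signal. This is a sketch of that idea only, not Amadeus' actual methodology (they described using a FinOps library for reallocation); the service names, CPU-second figures, and even split fallback are all assumptions.

```python
def allocate(total_g: float, usage_by_service: dict[str, float]) -> dict[str, float]:
    """Split one shared resource's emissions (gCO2e) across services,
    proportionally to a chosen usage signal (CPU-seconds here)."""
    total_usage = sum(usage_by_service.values())
    if total_usage == 0:
        # No usage signal available: fall back to an even split (a policy choice).
        n = len(usage_by_service)
        return {s: total_g / n for s in usage_by_service}
    return {s: total_g * u / total_usage for s, u in usage_by_service.items()}

shared_db_emissions = 90.0  # gCO2e for one shared database over a window
usage = {"booking": 600.0, "search": 300.0, "billing": 100.0}  # CPU-seconds
print(allocate(shared_db_emissions, usage))
```

The choice of usage signal (CPU, cost, requests) is exactly the methodological gap the list above calls out: different signals yield different allocations, and no standard currently mandates one.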

Workshop Planning and Next Steps Due to time constraints, Henry suggests postponing the AI SCI workshop discussion to the next meeting. Asim proposes that the GAIC and standards working group jointly decide on workshop participants and selection criteria. The overarching goal of the workshop is to define the scope and functional unit for the SCI standard for AI. Russ shares a document with questions and suggestions for the workshop, and Henry suggests sharing it with the standards working group for feedback.

SCI workshop to define the scope and functional unit for the SCI standard for AI:

  • Clearly define the problem statement and critical challenges the SCI standard for AI aims to address. This will help focus the workshop discussions.
  • Determine whether there should be a single functional unit for AI in general or if multiple functional units are needed for different classes or archetypes of AI systems. This decision should consider factors like model size, accuracy, and other relevant parameters.
  • Ensure the workshop includes representation from both AI experts and standards experts. This cross-pollination of expertise will be crucial for defining the appropriate scope and functional unit.
  • Prioritize high-level consensus on the key elements rather than getting bogged down in detailed wordsmithing; the baseline specification can be iteratively refined after the workshop.
  • Leverage existing work, such as the base SCI specification and the previous discussions from the Green AI Committee, as a starting point to build upon.
  • Prepare pre-read materials and research papers to help inform the workshop discussions and ensure participants are aligned on the field's current state.

The goal should be to produce a baseline SCI specification for AI that can serve as the community's foundation for further development and refinement.
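For context on what the workshop would be parameterising: the base SCI specification defines the score as SCI = ((E x I) + M) / R, where E is energy, I is grid carbon intensity, M is embodied emissions, and R is the functional unit. The choice of R for AI (per inference, per training run, per 1,000 tokens) is precisely what the workshop aims to decide. The numbers below are purely illustrative.

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: float) -> float:
    """Base SCI formula: SCI = ((E * I) + M) / R, in gCO2e per functional unit."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Hypothetical AI workload: 12 kWh of inference energy, grid at 400 gCO2e/kWh,
# 500 g of amortised embodied emissions, over one million inference requests.
print(sci(12.0, 400.0, 500.0, 1_000_000))  # gCO2e per inference
```

Note how strongly the result depends on R: the same workload scored per 1,000 tokens or per training run would produce very different numbers, which is why agreeing on the functional unit is the workshop's central task.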

Action Items