Green-Software-Foundation / writers

Management of GSF content and marketing efforts

[Article] Why we endorsed the Environmental Impact of AI Act #212

Closed NAMRATA-WOKE closed 3 months ago

NAMRATA-WOKE commented 4 months ago

Article Details

Please enter the details here

Proposer: Namrata Narayan | Dir. Communications & Member Relations | @NAMRATA-WOKE
Abstract: Provide a brief overview of the new legislation, why the GSF endorsed it, and how we hope to contribute to the greening of AI
Why: Knowledge-sharing and awareness building
Audience: Sustainability leaders
Timeline: March 2024

Important Information

To be filled out later

Author: @NAMRATA-WOKE
Other contributors: PWG Chairs
Article Link: (Link to the doc, must be in our official GSF drive)
Graphics Link: (Link to the design assets for this article, must be in our official GSF drive)
Published Link: (The link the article was published to for memory)

Checklist

NAMRATA-WOKE commented 4 months ago

References to pull from

- https://medium.com/datasociety-points/why-were-endorsing-the-ai-environmental-impacts-act-02c73138ffc8
- https://www.techpolicy.press/measuring-ais-environmental-impacts-requires-empirical-research-and-standards/

ursvill commented 4 months ago

500-800 Words

NAMRATA-WOKE commented 4 months ago

FYI @ursvill I've reached out to the PWG to support the article. As a reminder, I'd like us to focus on:

  1. Why this Act is important for organizations building AI solutions and leveraging LLMs to solve climate change
  2. How it can enable a culture for greening software
  3. Ways it can support our projects (e.g., SCI and Impact Framework)
NAMRATA-WOKE commented 4 months ago

FYI @ursvill

From Ruby:

Hi Namrata,

Some thoughts I hope are useful below:

AI models, particularly GenAI models, have huge carbon footprints, as the GSF is aware. This HBR article provides a useful analysis of how and why, noting that the footprint includes:

- The footprint to train the model
- The footprint from running inference with the model once deployed
- The footprint required to produce computing hardware and cloud data center capabilities

Therefore, it's important to consider the following in order to make AI greener:

- Leverage existing pretrained models where possible, and fine-tune them to customise them rather than building from scratch.
- Use energy-conserving code/queries and computational methods, e.g.:
  - Use the TinyML framework to process data locally without sending it to data servers.
  - Developers should code efficiently, using techniques that require less computational power.
  - Encourage users to prompt efficiently too, through training and adoption.
- Reuse and repurpose as much as you can, e.g.:
  - Repurpose baseline models for similar use cases: an existing bare-bones model may be usable in multiple places, with some minor custom fine-tuning on top.
  - Reuse existing reference architecture for deployment of models; developing an AI strategy and governance framework before implementing AI will support this.
- Only use AI when the value gained will be significant enough to offset both the cost and the impact. For example, if you test a use case but it only improves a process (makes it faster/less costly/more accurate) by 1%, the environmental cost is unlikely to be worth it. Evaluating use cases against a value framework as part of developing your AI strategy will support this.
- Include carbon monitoring of AI tools in your reporting so you can frequently evaluate your impact.
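The carbon-monitoring point above ties into the SCI work mentioned earlier in this thread. As a rough sketch only (the numbers below are illustrative placeholders, not real measurements, and `sci_score` is a hypothetical helper, not GSF tooling), the GSF's SCI specification scores software as ((E × I) + M) per R: operational energy times grid carbon intensity, plus embodied emissions, per functional unit:

```python
def sci_score(energy_kwh: float, grid_intensity_g_per_kwh: float,
              embodied_g: float, functional_units: float) -> float:
    """SCI-style score, ((E * I) + M) per R, in gCO2e per functional unit."""
    operational_g = energy_kwh * grid_intensity_g_per_kwh  # E * I
    return (operational_g + embodied_g) / functional_units  # (O + M) / R

# Illustrative placeholders: 2 kWh of inference energy on a 400 gCO2e/kWh grid,
# 5,000 g of embodied hardware emissions, amortised over 10,000 API calls.
print(sci_score(2, 400, 5000, 10_000))  # → 0.58 gCO2e per call
```

Tracking a score like this per release is one concrete way to "frequently evaluate your impact".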

Finally, and arguably most importantly, all organisations adopting AI must evaluate the energy sources of their cloud provider or data center - are they renewable or are they using energy from fossil fuels?

- The HBR article linked above explains that data centers are responsible for 2-3% of GHG emissions.
- Given the fast rise in AI adoption globally and across industries in recent years, which is likely to continue, ultimately making AI greener comes back to the energy transition.
- Businesses should think about how they can make better use of AI to accelerate themselves towards the energy transition, as Microsoft is attempting to support the UN in doing.

There is also a social impact/human cost from ever-larger cutting-edge LLMs, touched on by Ars Technica:

- Bigger and richer companies/governments/institutions are able to train, access and benefit from these models while others cannot.
- Human feedback on AI models, used to ensure they do not provide harmful responses, is being provided by underpaid and exploited workers.
- Finally, the global energy transition will require an enormous supply of energy storage equipment, and therefore raw materials whose supply we are not currently equipped to meet; there are already major human costs, in the form of worker exploitation and child labour, in countries attempting to speed up the mining of these materials to meet expected global demand.

In terms of guidance for legislators, given the predicted increase in demand for AI and the resulting need to accelerate the energy transition, legislators should:

- Enforce the development of a Responsible AI framework by all organisations using AI, including environmental, social and personal considerations that must be assessed and risks mitigated against.
- Restrict/prevent/regulate the use of AI by organisations until they have put in place a set of suitable energy transition targets/standards and demonstrated they are meeting them.

NAMRATA-WOKE commented 3 months ago

@Jenya-design can you please create a graphic for this article?

Jenya-design commented 3 months ago

@NAMRATA-WOKE please check the illustration attached and on Drive

The-GSF-Endorsed-the-AI-Environmental-Impacts-Act

NAMRATA-WOKE commented 3 months ago

Looks good, thanks @Jenya-design