johko / computer-vision-course

This repo is the homebase of a community driven course on Computer Vision with Neural Networks. Feel free to join us on the Hugging Face discord: hf.co/join/discord
MIT License

Unit 12 Ethics and Bias : Draft Outline #51

Closed snehilsanyal closed 2 months ago

snehilsanyal commented 8 months ago

Hey fellow CV Course Contributors and Reviewers πŸ€—

This issue discusses an initial draft outline for the Ethics and Bias unit. We read a few posts and blogs, searched through datasets, and created this simple, brief outline. Please feel free to share your views so we can improve this unit.

We prepared this unit by combining theoretical concepts, case studies, and practical examples, and we close with HuggingFace's mission and efforts to promote ethical AI for society. The structure is loosely inspired by the HF Audio Course.

1. Introduction

The ImageNet Roulette Case Study and Implications

2. Ethics and Bias in AI

What are ethics and bias? Why do they matter? Include short examples and reflect on the earlier ImageNet Roulette example.

3. How bias creeps into AI models (text, vision, speech)

Give one example for each modality, with implications, and mention the HF space: https://huggingface.co/spaces/society-ethics/

4. Types of bias in AI

5. Bias evaluation in CV models and Metrics

Some example case studies:

We can also include other studies of discrimination based on gender, ethnicity, and other factors.

As an example citation, use learnings from the blog post Bias in Text-to-Image Models: https://huggingface.co/blog/ethics-soc-4

6. Bias mitigation in CV models

7. HuggingFace's efforts: Ethics and Society

References:

  1. HuggingFace Ethics and Society newsletters 1 to 5
  2. HuggingFace Audio Course, for structure (great flow of theory, practical examples, and notebooks)
  3. fast.ai ethics course
  4. Kaggle micro-course: Introduction to AI Ethics
  5. Montreal Ethics AI for Computer Vision
  6. Ethical Dimensions of Computer Vision Datasets

Please let us know about the content, suggestions are highly welcome πŸ€— πŸš€ πŸ”₯
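To give a concrete taste of what the "Bias evaluation in CV models and Metrics" section could cover, here is a minimal, purely illustrative sketch of one common disparity metric: the accuracy gap of a classifier across demographic subgroups. The function name, group labels, and toy predictions are all hypothetical, not from the course:

```python
# Hypothetical sketch of a bias-evaluation metric: per-group accuracy
# of a CV classifier and the max-min disparity gap between groups.
# All names and data here are illustrative only.

def accuracy_gap(y_true, y_pred, groups):
    """Return per-group accuracy and the max-min accuracy gap."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        per_group[g] = correct / len(idx)
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Toy example: a face classifier evaluated on two (made-up) subgroups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
per_group, gap = accuracy_gap(y_true, y_pred, groups)
```

A large gap between the best- and worst-served group is one signal, among several, that a model's errors are unevenly distributed.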

mmhamdy commented 8 months ago

Looks amazing! Looking forward to reading it.

johko commented 8 months ago

Hey, thanks for taking on this very important section.

I really like your outline, especially your approach of basing the content on studies and real examples, which to me is super important for this topic.

As I'm also not an expert in the field, I cannot give much more feedback regarding the content, but it looks pretty solid to me :+1:

Maybe someone from the HF ethics team can have a look at the outline and give some suggestions. What do you think @merveenoyan ?

snehilsanyal commented 8 months ago

Thank you @johko for your comments πŸ€— πŸ€— We designed the outline based on HF's previous Audio Course, as it has a good structure. We would love insights and suggestions from the HF Ethics team to improve the content of the unit πŸ€— I also came across this today: https://huggingface.co/spaces/merve/measuring-diversity; maybe we can include this as well.

lunarflu commented 8 months ago

Maybe someone from the HF ethics team can have a look at the outline and give some suggestions.

Good idea!

merveenoyan commented 8 months ago

@snehilsanyal Hello πŸ‘‹ I really liked your outline! 🀩 pinging @giadilli and @meg-huggingface here. Would be nice to take a look πŸ‘€

giadilli commented 8 months ago

It looks good to me! Indeed, we already talked about bias in two different newsletters (one and two), plus you can check out our colleague's latest work StableBias.

snehilsanyal commented 8 months ago

Thank you @merveenoyan πŸ€— for your comments. Thank you @giadilli πŸ€— for your valuable comments; we will use the Ethics and Society newsletters as content for the chapter. The course contributors are currently also discussing Gradio demos to include in the course, so we were thinking of adding demos to our unit as well. Spaces/demos similar to StableBias would be super useful here, since they make the unit more visually appealing and playground-like (good for course participants). I also recall reading the 4th issue of the newsletter and its discussion of bias in text-to-image models; I think we can include this around the Bias in CV Models section.
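Demos like StableBias probe text-to-image models with controlled prompt variations and compare the generated images across those variants. A tiny, purely illustrative sketch of that probing idea (the template, word lists, and function name below are my own assumptions, not taken from StableBias) could look like:

```python
# Hypothetical sketch: build controlled prompt variants to probe a
# text-to-image model for representational bias. Each profession is
# crossed with each descriptor; comparing generations across
# descriptors surfaces skew. Template and word lists are illustrative.
from itertools import product

TEMPLATE = "a portrait photo of a {descriptor} working as a {profession}"
professions = ["doctor", "engineer", "nurse"]
descriptors = ["person", "woman", "man"]

def build_probe_prompts(professions, descriptors):
    """Cross every profession with every descriptor to form probe prompts."""
    return [TEMPLATE.format(descriptor=d, profession=p)
            for p, d in product(professions, descriptors)]

prompts = build_probe_prompts(professions, descriptors)
# Each prompt would then be sent to the model; the unmarked "person"
# variant shows what the model defaults to when no group is specified.
```

Wrapped in a Gradio interface, a prompt grid like this would give course participants the playground-like experience discussed above.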

ATaylorAerospace commented 8 months ago

I would also include, if possible, some of Google's skin tone research recommended practices: https://skintone.google/recommended-practices. This could be an add-on to Sections 3, 4, and 5, especially as it relates to computer vision.
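As a small illustration of how such recommended practices could feed into the dataset-auditing side of the unit, here is a hypothetical sketch that measures how evenly a dataset's skin-tone annotations cover a 10-point tone scale. The function name, bin count, and toy annotations are assumptions for illustration, not Google's API:

```python
# Hypothetical sketch: check how a dataset's skin-tone annotations
# (integers 1..n_tones, e.g. from a 10-point tone scale) are
# distributed, revealing under-represented tone bins.
from collections import Counter

def skin_tone_coverage(annotations, n_tones=10):
    """Return the fraction of images per tone bin, including empty bins."""
    counts = Counter(annotations)
    total = len(annotations)
    return {tone: counts.get(tone, 0) / total
            for tone in range(1, n_tones + 1)}

# Toy annotations: heavily skewed toward the lightest tones.
coverage = skin_tone_coverage([1, 1, 2, 10])
```

Bins with zero or near-zero coverage flag subgroups on which any model trained on the dataset is likely to underperform.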

ATaylorAerospace commented 8 months ago


@merveenoyan and @giadilli How are you? Are there opportunities for community input or volunteering etc for the HF Ethics and Bias Team? I would love to help in any way with those efforts at HF as well.

snehilsanyal commented 8 months ago

Hey @ATaylorAerospace, thank you for your comments! Please DM me on Discord and I will add you to the group DM where we work on the Ethics and Bias unit πŸ€—