Closed — snehilsanyal closed this issue 2 months ago
Looks amazing! Looking forward to reading it.
Hey, thanks for taking on this very important section.
I really like your outline, especially your approach of basing the content on studies and real examples, which to me is super important for this topic.
As I'm also not an expert in the field, I cannot give much more feedback regarding the content, but it looks pretty solid to me :+1:
Maybe someone from the HF ethics team can have a look at the outline and give some suggestions. What do you think @merveenoyan?
Thank you @johko for your comments! We designed the outline based on the previous HF audio course, as it has a good structure. We would love insights and suggestions from the HF Ethics team to improve the content of the unit. I also came across this today: https://huggingface.co/spaces/merve/measuring-diversity — maybe we can include this as well.
Maybe someone from the HF ethics team can have a look at the outline and give some suggestions.
Good idea!
@snehilsanyal Hello! I really liked your outline! Pinging @giadilli and @meg-huggingface here. Would be nice to take a look.
It looks good to me! Indeed, we have already covered bias in two different newsletters (one and two), and you can also check out our colleague's latest work, StableBias.
Thank you @merveenoyan for your comments, and thank you @giadilli for your valuable suggestions. We will use the Ethics and Society newsletter as a content source for the chapter. The course contributors are currently discussing adding Gradio demos to the course, so we are thinking of including demos in our unit as well. Spaces/demos similar to StableBias would be super useful here, as they make the unit more visually appealing and playground-like (good for course participants). I also recall the 4th issue of the newsletter discussing bias in text-to-image models; I think we can include that around the section on bias in CV models.
I would also include, if possible, some of Google's skin tone research recommended practices: https://skintone.google/recommended-practices. This could be an add-on to Sections 3, 4, and 5 especially, as it relates to computer vision.
@merveenoyan and @giadilli How are you? Are there opportunities for community input or volunteering with the HF Ethics and Bias team? I would love to help with those efforts at HF in any way I can.
Hey @ATaylorAerospace, thank you for your comments! Please DM me on Discord and I will add you to the group DM where we work on the Ethics and Bias unit.
Hey fellow CV Course Contributors and Reviewers!
This issue discusses an initial draft for the Ethics and Bias unit. We read a few posts and blogs, searched through datasets, and created this simple, brief outline. Please feel free to share your views so we can improve this unit.
We prepared this unit by combining theoretical concepts, case studies, and practical examples, closing with Hugging Face's mission and efforts to promote ethical AI for society. The structure is loosely inspired by the HF Audio Course.
1. Introduction
The ImageNet Roulette case study and its implications
2. Ethics and Bias in AI
What are ethics and bias? Why do they matter? Include short examples and reflect on the earlier ImageNet Roulette example.
3. How bias creeps into AI models (text, vision, speech)
Give one example per modality with its implications, and mention the HF space: https://huggingface.co/spaces/society-ethics/
4. Types of bias in AI
5. Bias evaluation in CV models and metrics
Some example case studies:
We can also include other studies covering discrimination based on gender, ethnicity, and other factors.
Cite and draw on the learnings from the blog post "Bias in Text-to-Image Models": https://huggingface.co/blog/ethics-soc-4
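For the metrics part of this section, a simple bias-evaluation measure such as the demographic parity gap could be sketched in a few lines of plain Python. This is just a hypothetical illustration (the group names and predictions below are made up, not taken from any cited study):

```python
# Sketch: demographic parity gap for a binary classifier.
# A gap of 0 means every group receives positive predictions
# at the same rate; larger gaps indicate disparate treatment.

def selection_rate(preds):
    """Fraction of positive (1) predictions in a group."""
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_by_group):
    """Difference between the highest and lowest selection rates."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs, split by a sensitive attribute:
preds = {
    "group_a": [1, 1, 0, 1],  # selection rate 0.75
    "group_b": [1, 0, 0, 0],  # selection rate 0.25
}
print(f"Demographic parity gap: {demographic_parity_gap(preds):.2f}")
```

A demo in the unit could compute this interactively over a real CV dataset; libraries like Fairlearn also offer ready-made versions of such metrics.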
6. Bias mitigation in CV models
7. HuggingFace's efforts: Ethics and Society
This will be the closing chapter of the unit, so we include Hugging Face's efforts in promoting ethics for society :D
Talk about Hugging Face's mission of transparency and reproducibility (model cards, datasets, evaluate), among other aspects.
The six categories of HF Spaces submissions: Rigorous, Consentful, Socially Conscious, Sustainable, Inclusive, and Inquisitive — https://huggingface.co/spaces/society-ethics/about
Please let us know what you think of the content — suggestions are highly welcome!