numfocus / outreachy-contributions-2023

This repository will be used to capture Outreachy applicants' contributions during the Applications phase - May-July 2023 Cohort
BSD 3-Clause "New" or "Revised" License

First Contribution by Joan Ifeanyi #57

Jobryte opened this issue 1 year ago

Jobryte commented 1 year ago
NAME: Joan Amarachukwu IFEANYI

PROJECT: Interpret (https://interpret.ml/; GitHub: interpretml/interpret, "Fit interpretable models. Explain blackbox machine learning."). InterpretML is an open-source package that brings together many machine learning interpretability algorithms. It provides the ability to train interpretable glassbox models and to explain blackbox systems, helping users understand both individual predictions and the overall behaviour of a model. Interpretability is crucial for a number of tasks, including model debugging, feature engineering, spotting unfairness problems, enhancing human-AI collaboration, ensuring regulatory compliance, and high-risk applications in the legal, financial, and healthcare sectors. With InterpretML, users can understand and trust their models' decisions and verify that the models operate properly and in compliance with the law.

GOVERNANCE MODEL: The governance documentation has various sections that break down the governance and code of conduct of Interpret. Details about the governance model of the InterpretML organization and the Interpret project can be found, respectively, at https://github.com/interpretml/governance and https://github.com/interpretml/interpret/blob/develop/GOVERNANCE.md. The InterpretML open-source project seeks to streamline the process of developing and analyzing machine learning models. Its Code of Conduct and Governance Model are intended to encourage openness, diversity, and community-driven decision-making. The InterpretML governance model has the following key values:

  1. Openness and transparency
  2. Inclusivity and diversity
  3. Community-driven decision-making

Openness and transparency: InterpretML's decision-making process is open and transparent. Anyone can propose changes or new features, and the maintainers and other contributors discuss and evaluate these suggestions. After a proposal is approved, it undergoes a code review procedure before being merged into the project's main branch. Discussions and decisions are documented via GitHub issues, pull requests, and other project-related communication channels so that the entire community can see and take part in the process.

Inclusivity and diversity: InterpretML aims to create a friendly, welcoming community where everyone is respected. The project invites contributions from all parties, regardless of background, level of experience, or other characteristics. The project's maintainers actively work to foster an environment that is inclusive and diverse, and they strive to ensure that all opinions are heard and taken into account when making decisions. Details can be found here: https://github.com/interpretml/governance/blob/master/code-of-conduct.md

Community-driven decision-making: Because InterpretML is a community-driven project, decisions are made collaboratively and by consensus. The project's maintainers serve as decision-makers and facilitators, but they also solicit suggestions and feedback from the wider community. The project's governance structure is set up to guarantee that decisions are made in a transparent and inclusive manner and that the needs and viewpoints of the community are taken into consideration. See https://github.com/interpretml/governance/blob/master/TSC.md

PROJECT ROLES: A core group of maintainers is in charge of InterpretML's overall development and direction. These maintainers are listed on the project's GitHub page; they are added or removed by the current maintainers (excluding the maintainer in question) or under the oversight of the organization's Technical Steering Committee ("TSC"). In addition to the core maintainers, the project also has contributors who contribute code, documentation updates, and other materials. Anyone can contribute to the project, regardless of their machine learning expertise.

DECISION-MAKING PROCESS: InterpretML follows an open and transparent decision-making process that is documented on the project's GitHub page. Anyone can propose changes or new features, and these proposals are discussed and evaluated by the maintainers and other contributors. Once a proposal has been accepted, it goes through a code review process before being merged into the main branch of the project. Decisions may also be appealed.

HOW EASY/DIFFICULT IT WAS TO FIND & UNDERSTAND THE GOVERNANCE MODEL: At first, it was neither too easy nor too difficult for me to locate. I took time to look through the whole project to understand what InterpretML was about. At some point I almost got lost on GitHub, as this is my first time working on an open-source project. Once I retraced my steps to the initial link, it was quite easy to locate the governance model for Interpret. I would suggest that the links to both InterpretML and Interpret be added to the task with clear descriptions, so that one can easily tell when they are off track. Otherwise, the project is pretty interesting and beautifully structured.

ADDITIONAL INTEREST-RELATED AREAS: Healthcare, banking, and retail are just a few of the many industries that can and currently do employ InterpretML. Particular applications include recognizing fraudulent credit card transactions, forecasting customer churn in the telecom sector, and predicting hospital readmissions.

Government, where there is a need for accountability and transparency in decision-making, and the legal sector, where interpretable models are required for legal and ethical reasons, are two sectors that could benefit greatly from InterpretML.

Features: InterpretML is an open-source package created by Microsoft for interpreting machine learning (ML) models. It offers a set of tools and methods for understanding the behaviour and decisions of ML models, and for making these models more transparent and comprehensible. The following are some of InterpretML's key features:

Interpretable models: InterpretML offers a variety of interpretable ("glassbox") models, such as decision trees and linear models, which are designed to be clearer and easier to understand than complicated black-box models. It also provides tools for estimating feature importance scores for a given machine learning model, which makes it possible to identify the most important features in the data and to understand how the model uses them.
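To make this concrete, here is a minimal sketch (my own, not taken from the project docs) of training one of InterpretML's glassbox models, the Explainable Boosting Machine, and viewing its global feature importances. The dataset choice (scikit-learn's breast cancer data) is illustrative only:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Sample data, purely for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an interpretable (glassbox) model.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# The global explanation includes per-feature importance scores.
show(ebm.explain_global(name="EBM"))
```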

Global and local explanations: InterpretML offers both global and local explanation techniques. Global explanation methods give a broad view of how the model makes decisions over the entire dataset, while local explanation methods offer granular insight into how the model makes decisions on specific instances.
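Continuing the sketch above (reusing the same ebm, X_test, and y_test), the two kinds of explanation might look like this:

```python
# Global: a whole-dataset view of how the model behaves.
global_exp = ebm.explain_global(name="EBM (global)")

# Local: per-instance explanations for a handful of test rows.
local_exp = ebm.explain_local(X_test[:5], y_test[:5], name="EBM (local)")

show(global_exp)
show(local_exp)
```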

Model diagnostics: InterpretML offers tools for identifying potential problems with machine learning models, such as bias and overfitting. These tools can help users locate and fix model flaws to improve model performance and reliability.
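A hedged sketch of the performance-diagnostics side, again reusing the model from above. Note that the exact constructor signature of interpret.perf.ROC has varied across releases, so this follows the pattern from older documentation:

```python
from interpret.perf import ROC

# Older releases take a prediction function; newer ones may accept the
# model object directly. Adjust to your installed version.
ebm_perf = ROC(ebm.predict_proba).explain_perf(X_test, y_test, name="EBM ROC")
show(ebm_perf)
```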

Visualization tools: InterpretML offers a variety of visualization tools for analyzing machine learning models. With their help, users can examine the relationships between features, understand how the model makes use of each feature, and see how the model arrives at its decisions.
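As one more assumption-laden illustration: in the versions of the library I have seen documented, show() also accepts a list of explanations and renders them together in a combined dashboard:

```python
# Render the explanations created above side by side in one dashboard
# (API may vary by version of interpret).
show([global_exp, local_exp, ebm_perf])
```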

Overall, InterpretML is an effective tool for understanding machine learning models, increasing their transparency, and providing additional context for their results. Its capabilities help users better understand how their models arrive at decisions and improve the effectiveness and reliability of those models.

Jobryte commented 1 year ago

Hi @arliss-NF, here is my first contribution. Thank you.