Closed inkerton closed 2 weeks ago
Thank you for creating this issue! We'll look into it as soon as possible. Your contributions are highly appreciated!
Can you elaborate more on the approach? I didn't understand the explainable AI part.
Explainable AI is the set of algorithms that explains the reasoning behind a prediction, e.g. how the AI came to the conclusion that it is a tumor and not just a lump or a ball of mass. This explanation of the decision process is what is termed Explainable AI.
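To make the idea concrete, here is a minimal, dependency-light sketch of a local explanation (the same idea LIME formalizes): perturb one feature of a single sample at a time and see how much the model's predicted probability shifts. This uses scikit-learn's built-in breast cancer dataset as a stand-in; the variable names are illustrative, not from the project.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# Explain a single prediction: which features drove it?
sample = X[0].copy()
base_prob = model.predict_proba([sample])[0, 1]

# Replace one feature at a time with its dataset mean and record how
# much the predicted probability shifts; the features with the largest
# shift contributed most to this particular prediction.
effects = {}
for i, name in enumerate(data.feature_names):
    perturbed = sample.copy()
    perturbed[i] = X[:, i].mean()
    effects[name] = abs(base_prob - model.predict_proba([perturbed])[0, 1])

top_features = sorted(effects, key=effects.get, reverse=True)[:3]
```

LIME does essentially this more rigorously, by fitting an interpretable surrogate model on many such perturbations around the sample.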
Already present in this repository, hence closing this issue as a duplicate entry.
Deep Learning Simplified Repository (Proposing new issue)
:red_circle: Project Title : Breast Cancer Type Detection using Explainable AI

:red_circle: Aim : The main objective of this project is to predict whether a tumor is benign or malignant using machine learning models and to explain the model's decisions using LIME (Local Interpretable Model-Agnostic Explanations). The dataset used in this project is the Breast Cancer Dataset available from Kaggle.

:red_circle: Dataset : https://www.kaggle.com/datasets/yasserh/breast-cancer-dataset

:red_circle: Approach : Try to use 3-4 algorithms to implement the models and compare them to find the best-fitted algorithm for the task by checking their accuracy scores. Also, do not forget to do an exploratory data analysis before creating any model.
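The compare-several-algorithms step of the approach could be sketched as below. This is only a minimal illustration using scikit-learn's bundled breast cancer data rather than the Kaggle CSV, and the four chosen algorithms are an assumption, not a requirement of the issue.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Candidate algorithms; scale-sensitive models get a StandardScaler step.
models = {
    "LogisticRegression": make_pipeline(StandardScaler(),
                                        LogisticRegression(max_iter=1000)),
    "DecisionTree": DecisionTreeClassifier(random_state=42),
    "RandomForest": RandomForestClassifier(random_state=42),
    "SVM": make_pipeline(StandardScaler(), SVC()),
}

# Fit each model and compare held-out accuracy to pick the best fit.
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
best = max(scores, key=scores.get)
```

In a full submission this would be preceded by exploratory data analysis (class balance, feature distributions, correlations), and the winning model would then be passed to LIME for per-prediction explanations.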
Follow the Guidelines to Contribute in the Project :

- The `requirements.txt` file will contain the required packages/libraries to run the project on other machines.
- Inside the `Model` folder, the `README.md` file must be filled up properly, with proper visualizations and conclusions.

:red_circle::yellow_circle: Points to Note :

:white_check_mark: To be Mentioned while taking the issue :
Happy Contributing!
All the best. Enjoy your open source journey ahead.