neurodata / scikit-learn

scikit-learn-tree fork: A fork that enables extensions of the Python and Cython APIs for decision trees
https://scikit-learn.org
BSD 3-Clause "New" or "Revised" License

Checking the performance of classifiers in high dimensional noise setting. #1

Open sahanasrihari opened 4 years ago

sahanasrihari commented 4 years ago

Description

scikit-learn's example comparing classifier accuracies does not cover multiple settings for testing various scenarios. There is no concrete example showing when some of these algorithms win and when they lose. One scenario to consider: given a dataset of relatively low dimension, how does the accuracy of each classifier change as noise dimensions are added?

Noise dimensions are features appended to the dataset that bear no relevance to the original signal dimensions.
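As a minimal sketch of this idea (the helper name `add_noise_dims` is hypothetical, not from the proposed notebook), appending noise dimensions just means stacking columns of Gaussian draws onto the feature matrix:

```python
import numpy as np

def add_noise_dims(X, n_noise, sigma, seed=None):
    """Append n_noise Gaussian noise features (mean 0, std sigma) to X.

    These columns carry no information about the labels, so they only
    dilute the signal dimensions. Hypothetical helper for illustration.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=sigma, size=(X.shape[0], n_noise))
    return np.hstack([X, noise])

X = np.ones((5, 2))                                # 5 samples, 2 signal dims
X_noisy = add_noise_dims(X, n_noise=3, sigma=1.0, seed=0)
print(X_noisy.shape)                               # (5, 5): 2 signal + 3 noise
```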

Goal: To check the performance of Random Forest, Support Vector Machine, and K-Nearest Neighbors as three different classifiers under the addition of Gaussian noise dimensions, across three different variance values.

Proposed changes in the form of a PR: I am proposing a new tutorial in the form of a Jupyter notebook containing all the code from data generation to the computation of accuracies across noise dimensions. The final figure will contain a plot of the original datasets adopted from https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html and 9 different plots of "Accuracy vs. Number of Noise Dimensions" for the 3 different datasets and 3 different variances of Gaussian noise. The plots will contain the testing accuracies averaged over 50 trials of the experiment.

Here is a link to the code: https://github.com/NeuroDataDesign/team-forbidden-forest/blob/master/Sahana/FINAL_PR_classifiers.ipynb
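The experimental loop described above could be sketched roughly as follows. This is an illustrative version under assumptions (one dataset from `make_moons`, one noise variance, a small grid of noise-dimension counts, single trial), not the notebook's exact code:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_moons(n_samples=200, noise=0.3, random_state=0)

# The three classifiers from the proposal; hyperparameters here are
# placeholders, not the notebook's actual settings.
classifiers = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(gamma=2, C=1),
    "KNN": KNeighborsClassifier(n_neighbors=3),
}

sigma = 1.0                       # one of the three noise variances
for n_noise in [0, 10, 50]:       # number of appended noise dimensions
    noise = rng.normal(0.0, sigma, size=(X.shape[0], n_noise))
    X_aug = np.hstack([X, noise])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_aug, y, test_size=0.4, random_state=0
    )
    for name, clf in classifiers.items():
        clf.fit(X_tr, y_tr)
        print(name, n_noise, round(clf.score(X_te, y_te), 3))
```

The full proposal repeats this over 3 datasets, 3 variances, and 50 trials, then plots mean test accuracy against the number of noise dimensions.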

bdpedigo commented 4 years ago

This issue is unclear about what you are actually proposing to PR into sklearn. You could be a lot more detailed about the fact that you are proposing a new tutorial, what the figures will be, and what the data is.

sahanasrihari commented 4 years ago

Description

scikit-learn's example comparing classifier accuracies does not cover multiple settings for testing various scenarios. There is no concrete example showing when some of these algorithms win and when they lose. One scenario to consider: given a dataset of relatively low dimension, how does the accuracy of each classifier change as noise dimensions are added?

Noise dimensions are features appended to the dataset that bear no relevance to the original signal dimensions.

Goal: To check the performance of Random Forest, Support Vector Machine, and K-Nearest Neighbors as three different classifiers under the addition of Gaussian noise dimensions, across three different variance values.

Proposed changes in the form of a PR: I am proposing a new tutorial in the form of a Jupyter notebook containing all the code from data generation to the computation of accuracies across noise dimensions. The final figure will contain a plot of the original datasets adopted from https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html and 9 different plots of "Accuracy vs. Number of Noise Dimensions" for the 3 different datasets and 3 different variances of Gaussian noise. The plots will contain the testing accuracies averaged over 50 trials of the experiment.

Code: https://github.com/sahanasrihari/scikit-learn/blob/master/examples/classification/CLASSIFIER_COMPARISON_PR.ipynb

> This issue is unclear about what you are actually proposing to PR into sklearn. You could be a lot more detailed about the fact that you are proposing a new tutorial, what the figures will be, and what the data is.

@bdpedigo I have made changes to the issue; please let me know if more detail is needed.