apavlo89 opened this issue 4 years ago
Hello,
First of all, PPScore is so good, great job! Using only the PPS score to reduce the number of features gives better results than other feature elimination techniques, and running it first, before applying other feature elimination techniques, always yields better results than running pretty much any other feature elimination technique alone.
I just wanted to ask how to change the scoring from F1 to ROC, or more specifically Precision-Recall curves (as I have moderately imbalanced classes), for my binary classification problem.
Thank you for your help in the matter.
Regards,
Achilleas
Hi Achilleas,
thank you for your feedback, and I am happy to hear that ppscore is helpful to you. Currently, you cannot change the classification score from F1 to ROC, but I agree that this would be a useful addition. How would you like to change the score from F1 to ROC? What would you like the user experience/API to look like?
Regards, Florian
I am not sure; I'm a programming noob and still learning the ropes with ML. I'm a neuroscientist... Also, I've done some reading, and different cross-validation strategies suit different problems, so for my small dataset I've changed the stratified CV that ppscore uses to leave-one-out CV (LOOCV). That was easy to change and apply to PPS.
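For concreteness, a minimal sketch of that change might look like the following. It assumes ppscore forwards its cross_validation argument to sklearn's cross_val_score as the cv argument, so a splitter object can be passed in place of a fold count; if your ppscore version only accepts an integer, the splitter has to be swapped inside ppscore's own code. The file name is hypothetical:

# Sketch: replace ppscore's default stratified k-fold with leave-one-out CV.
import pandas as pd
import ppscore as pps
from sklearn.model_selection import LeaveOneOut

df = pd.read_csv("my_small_dataset.csv")  # hypothetical dataset

LOOCV_method = LeaveOneOut()
results = pps.predictors(df, y="Group", random_seed=16,
                         cross_validation=LOOCV_method)
print(results[["x", "ppscore"]])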
But then I've read that the appropriate performance metric also depends on the morphology of the dataset. For example, different metrics should be used depending on whether you are solving a multiclass or a binary classification problem, and on the class distribution. Plain 'accuracy' is generally bad (as you are aware), but 'balanced accuracy' might be better than even an 'F1' score for a binary classification problem with imbalanced classes. In general, the various accuracy metrics have different strengths and weaknesses depending on the dataset and on what you are trying to achieve. It would be good to let the user easily choose which one they want, e.g.,
pps_predictors_results = pps.predictors(dataset, y="Group", random_seed=16, cross_validation=LOOCV_method, scoring='balanced_accuracy')
The scoring variable could take any of the metric names from sklearn, e.g., 'accuracy', 'adjusted_rand_score', 'average_precision', 'balanced_accuracy', 'f1' (and its 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted' variants), 'neg_log_loss', 'precision' and 'recall' (same suffixes as 'f1'), 'jaccard' (same suffixes as 'f1'), 'roc_auc', 'roc_auc_ovr', 'roc_auc_ovo', 'roc_auc_ovr_weighted', 'roc_auc_ovo_weighted'.
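To illustrate how little plumbing such a proposal would need: sklearn already resolves these strings through its scorer registry, so a hypothetical scoring parameter could be passed straight through. The function below is a toy stand-in for ppscore's internal scoring step, not its real code:

# Hypothetical sketch: resolve a user-supplied `scoring` string with
# sklearn's scorer registry and use it during cross-validation.
from sklearn.metrics import get_scorer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def classification_score(X, y, scoring="f1_weighted", cv=4, random_seed=16):
    """Toy stand-in for ppscore's internal model scoring (illustrative only)."""
    model = DecisionTreeClassifier(random_state=random_seed)
    scorer = get_scorer(scoring)  # raises a clear error on unknown metric names
    return cross_val_score(model, X, y, cv=cv, scoring=scorer).mean()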
Am I making sense? Like I said, I'm quite new to ML; I only started learning it a year ago.
I like the suggestion and think that this can be worthwhile. About the API proposal:
cross_validation=LOOCV_method
was it on purpose that you referred to the LOOCV_method object that might be defined somewhere, or did you want to refer to it via a string? Also, would you like to take on this task?
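(For context on this design choice: sklearn's cross_val_score already accepts either an integer fold count or a splitter object, so passing the object through would be the smaller change. A string API such as 'loocv' would need a small lookup table inside ppscore; a hypothetical sketch, with illustrative names:)

# Hypothetical resolver if ppscore adopted string aliases for CV strategies.
from sklearn.model_selection import KFold, LeaveOneOut, StratifiedKFold

_CV_ALIASES = {
    "stratified_kfold": StratifiedKFold(n_splits=4),
    "kfold": KFold(n_splits=4),
    "loocv": LeaveOneOut(),
}

def resolve_cv(cross_validation):
    """Accept an int fold count, a known string alias, or a splitter object."""
    if isinstance(cross_validation, str):
        return _CV_ALIASES[cross_validation]  # KeyError flags unknown aliases
    return cross_validation  # ints and splitter objects pass straight through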
1) Cool. 2) No, I defined LOOCV_method earlier in my code. 3) Yes, I agree: choosing the right performance metric depends on the data.
I wish I could, but I don't have the technical knowledge to do this, I'm afraid!
Alright, thank you for the answers. And no worries if you cannot work on it yourself - the suggestion and rough design of the user experience are also very valuable. Also, maybe you will rise to the occasion at some point - after all, we grow with our challenges :)
Take care and talk to you soon
I would also like a better approach to handling class imbalance and am happy to have a go at adding this functionality. I was thinking of exposing the average parameter of f1_score as an optional user input instead of hardcoding it to "weighted". This would be a pretty lightweight approach and would give you all the functionality and error handling of the underlying sklearn function. How does this sound @FlorianWetschoreck ?
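A minimal sketch of what I mean (the function name is illustrative, not ppscore's actual internals):

# Sketch: expose f1_score's `average` argument instead of hardcoding "weighted".
from sklearn.metrics import f1_score, make_scorer

def make_f1_scorer(average="weighted"):
    # sklearn validates `average` itself, so an unknown value raises a ValueError
    return make_scorer(f1_score, average=average)

# e.g., a user worried about imbalance could ask for the unweighted macro mean:
macro_f1 = make_f1_scorer(average="macro")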