Closed: 11Sadegh11 closed this issue 1 year ago
Thanks for your feedback. Sklearn's implementation is way more complex than my own. I just used a linear trapezoid method on 250 datapoints of the ROC curve. In my test cases I only experienced a deviation in the third decimal place compared to sklearn's roc_auc_score. That was totally fine for me, so I stuck with it. Do you experience higher deviations than that? If so, could you provide an example?
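For context, this is roughly what that approximation looks like (a minimal sketch; the function name and sampling details are illustrative, not my exact code):

```python
import numpy as np

def trapezoid_auroc(y_true, y_score, n_points=250):
    """Approximate AUROC by trapezoidal integration of an ROC curve
    sampled at a fixed grid of thresholds (illustrative sketch)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    thresholds = np.linspace(0.0, 1.0, n_points)
    # TPR/FPR at each threshold: fraction of positives/negatives scored above it
    tpr = np.array([np.mean(y_score[y_true == 1] >= t) for t in thresholds])
    fpr = np.array([np.mean(y_score[y_true == 0] >= t) for t in thresholds])
    # Thresholds run 0 -> 1, so (fpr, tpr) runs from (1, 1) down toward (0, 0);
    # np.trapz over a decreasing x-axis is negative, hence the sign flip.
    return -np.trapz(tpr, fpr)
```

A fixed grid can sample the empirical ROC curve coarsely, and with heavily tied scores (isotonic calibration in particular produces many identical probabilities) the sampled points may misrepresent the curve, while sklearn's roc_auc_score uses every distinct score as a threshold and handles ties exactly.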
Yes, I do. The following is what I tried: I got an AUROC of 0.9407 from sklearn's roc_auc_score, but your implementation returned about 0.966.
```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import roc_auc_score
from pycaleva import CalibrationEvaluator

# Imbalanced binary classification problem, isotonic calibration of an SVM
X, y = make_classification(n_samples=5000, n_features=5, n_classes=2, weights=[0.8, 0.2], random_state=42)
trainX, testX, trainy, testy = train_test_split(X, y.ravel(), test_size=0.2, random_state=4, stratify=y.ravel())

clf = SVC(class_weight='balanced')
Iso_cal = CalibratedClassifierCV(base_estimator=clf, cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=42), method='isotonic', ensemble=False)
Iso_cal.fit(X=trainX, y=trainy)
Iso_pr = Iso_cal.predict_proba(testX)

Isocal_auc = roc_auc_score(y_true=testy, y_score=Iso_pr[:, 1])  # 0.9407546309981278
Isocal_auroc = CalibrationEvaluator(testy, Iso_pr[:, 1], outsample=True, n_groups='auto').auroc  # 0.9662035668538773
```
That's quite a difference! I decided to drop my own AUROC implementation completely in favor of sklearn's. The release of the fixed version will come within the next few days.
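For illustration, a minimal sketch of what that change could look like (the class internals here are assumed for the example, not pycaleva's actual source):

```python
from sklearn.metrics import roc_auc_score

class CalibrationEvaluator:
    def __init__(self, y_true, y_pred, outsample=True, n_groups='auto'):
        self._y_true = y_true
        self._y_pred = y_pred

    @property
    def auroc(self):
        # Delegate to sklearn's exact, tie-aware implementation instead of
        # a fixed-grid trapezoidal approximation.
        return roc_auc_score(self._y_true, self._y_pred)
```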
That would be great.
I just switched to sklearn's implementation of roc_auc_score and released a new version of pycaleva. 😸
Hello Mr. Weigl, first of all I want to thank you for your great library, which was really needed. I ran into a problem while checking some attributes of your library: in some cases the AUROC score from CalibrationEvaluator differed considerably from the roc_auc_score implemented in the sklearn library. I hope you can find and fix the problem. Best regards!