Closed: poult-lab closed this issue 1 month ago
Hello! Thank you for your interest in our project.
Two methods were used to determine the threshold: 1) using the validation dataset, sweep the threshold from 0 to 1 in increments of 0.01 and take the value that produces the best F1-score; 2) fixing it at 0.5.
Experimentally, there was no significant performance difference between the two methods.
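A minimal sketch of method 1), the validation-set sweep described above. The function and array names (`best_f1_threshold`, `val_labels`, `val_scores`) are placeholders for illustration, not identifiers from the repository:

```python
import numpy as np
from sklearn.metrics import f1_score

def best_f1_threshold(val_labels, val_scores):
    """Sweep thresholds 0..1 in steps of 0.01 on validation scores
    and return the one giving the best F1-score (hypothetical helper)."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.arange(0.0, 1.01, 0.01):
        preds = (val_scores > t).astype(int)  # binarize scores at threshold t
        f1 = f1_score(val_labels, preds, zero_division=0)
        if f1 > best_f1:  # keep the first threshold achieving the best F1
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

Since the sweep only needs the validation split once, its cost is negligible next to training, which is consistent with the observation that both methods end up performing similarly.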
Thank you, author, for your prompt reply.
Hello author,
I am wondering what threshold you set for the F1-score. I saw in the codebase that you used sklearn.metrics.average_precision_score() to get the AUPRC, but calculating the F1-score with sklearn.metrics.f1_score() requires a threshold: when the output is larger than the threshold, the prediction is set to 1; otherwise, it is set to 0. Could you share this threshold with us?
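For context on why only the F1-score needs a threshold: AUPRC is computed directly from the raw scores, while F1 requires hard 0/1 predictions. A small sketch with placeholder data (the arrays below are illustrative, not from the project):

```python
import numpy as np
from sklearn.metrics import average_precision_score, f1_score

labels = np.array([0, 1, 0, 1, 1])            # placeholder ground truth
scores = np.array([0.2, 0.8, 0.6, 0.4, 0.9])  # placeholder model outputs

# AUPRC is threshold-free: it summarizes precision over all recall levels.
auprc = average_precision_score(labels, scores)

# F1 needs hard predictions, so the scores are binarized at a threshold
# (0.5 here, matching method 2 in the author's reply).
preds = (scores > 0.5).astype(int)
f1 = f1_score(labels, preds)
```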