Closed: Tanveer81 closed this issue 2 months ago
Hi Tanveer81, thank you for your support of our program.
Our benchmark consists of two parts: the computation of uncertainty scores and the ROC-AUC metric. The latter can be obtained with sklearn.metrics.roc_curve and sklearn.metrics.auc, as in the example below.
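A minimal sketch of the ROC-AUC step. The `labels` (1 = hallucinated, 0 = faithful) and `scores` arrays are placeholder data, not values from the benchmark; any uncertainty score produced by the methods above can be plugged into `scores`.

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

# Placeholder ground-truth hallucination labels and uncertainty scores.
labels = np.array([0, 1, 0, 1, 1, 0])
scores = np.array([0.1, 0.8, 0.3, 0.7, 0.9, 0.2])

# ROC curve from (labels, scores), then the area under it.
fpr, tpr, _ = roc_curve(labels, scores)
roc_auc = auc(fpr, tpr)
print(f"ROC-AUC: {roc_auc:.3f}")
```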
As for the uncertainty scores, the computation utilities for the unsupervised methods are in utils/funs_get_feature_X.py; they only require the hidden_state tensors and the tracked query indexes. For the supervised methods, the functions are in supervised_generation.py, i.e., generate_ask4conf() and generate_uncertainty_score(). We follow the original papers and adapt them to our setting. To use them with your own model, you may need to define your own generate() call and prepare the dataset; after that, you can simply reuse them. A rough sketch of this wiring is given below.
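A hedged sketch of the adaptation described above: collect hidden states from your own (Hugging Face-style) model and hand them, together with the tracked query indexes, to a feature function from utils/funs_get_feature_X.py. The model name, the choice of query indexes, and the `extract_feature` call are assumptions for illustration, not the repository's actual API; check the real function names and signatures in the repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: any causal LM checkpoint; replace with your custom model.
model_name = "your-org/your-custom-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Who wrote 'The Old Man and the Sea'?"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Forward pass that also returns per-layer hidden states.
    outputs = model(**inputs, output_hidden_states=True)

# Tuple of (num_layers + 1) tensors, each of shape (batch, seq_len, hidden_dim).
hidden_states = outputs.hidden_states

# Assumption: the "tracked query indexes" are the token positions of the query
# span you care about; here we simply take every prompt token.
query_indexes = list(range(inputs["input_ids"].shape[1]))

# score = extract_feature(hidden_states, query_indexes)
# ^ HYPOTHETICAL call; substitute the actual utility from utils/funs_get_feature_X.py.
```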
Hope this has solved the problem. Please open a new issue if you have any other questions.
Thanks for the great work.
I want to use the baseline unsupervised methods and the proposed supervised method with a custom model I developed. What is the easiest way to adapt your method to a new code repository?
Thanks in advance.