hamedwaezi01 opened 1 year ago
I integrated the loss into a simple feedforward model. I had to rewrite the `learn` and `test` methods, as they needed specific changes.
A new dataset was also defined. The code for SuperLoss itself, which was requested from the authors, is in this file.
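The authors' file itself isn't shown here, but for reference, the per-sample SuperLoss from Castells et al. (NeurIPS 2020) can be sketched as below. This is my own minimal stand-in, not the code from that file: the names `superloss`, `tau`, and `lam` are mine, and the Lambert W function is implemented from scratch via Halley's iteration so the snippet has no dependencies.

```python
import math

def lambertw(x, iters=20):
    # Principal branch of the Lambert W function (solves w * e^w = x),
    # valid for x >= -1/e, computed with Halley's iteration.
    if x <= -1.0 / math.e + 1e-12:
        return -1.0  # boundary of the principal branch
    w = x if x < 0 else math.log1p(x)  # cheap initial guess
    for _ in range(iters):
        ew = math.exp(w)
        f = w * ew - x
        w -= f / (ew * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
    return w

def superloss(loss, tau, lam=1.0):
    # Per-sample SuperLoss:
    #   sigma* = exp(-W(0.5 * max(-2/e, (loss - tau) / lam)))
    #   SL     = (loss - tau) * sigma* + lam * log(sigma*)**2
    # tau is the loss threshold (e.g. a running average of the base loss);
    # lam is the regularization strength controlling how aggressively
    # hard (high-loss) samples are downweighted.
    beta = (loss - tau) / lam
    sigma = math.exp(-lambertw(0.5 * max(-2.0 / math.e, beta)))
    return (loss - tau) * sigma + lam * math.log(sigma) ** 2, sigma
```

At `loss == tau` the confidence weight `sigma` is exactly 1 and the SuperLoss is 0; samples with loss above `tau` get `sigma < 1` (downweighted), and easy samples get `sigma > 1`, capped at `e` by the clipping. In a real training loop this would wrap the per-sample BCE-with-logits values before reduction.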
Logs: ANN with SuperLoss, and ANN with plain Binary Cross Entropy with Logits.
The metrics:
with superloss
test set -> AUCROC: 0.9664882 | AUCPR: 0.9345212 | accuracy: 0.9229287 | precision: 0.8766390 | recall: 0.9267327
without superloss
test set -> AUCROC: 0.9805629 | AUCPR: 0.9581877 | accuracy: 0.9426247 | precision: 0.9146374 | recall: 0.9403026
We can see that SuperLoss marginally worsens the metrics on the test set. Note that we ran SuperLoss with default parameters; if we tune these values, we might end up with a better result. I have not reviewed the literature yet, so if you @rezaBarzgar have any ideas or notes about the parameters or other things, please leave them here.
@hamedwaezi01 I am currently testing different parameters on the OpeNTF project but have yet to find a suitable value. I will continue experimenting. I will share my experience and any positive results I achieve with the parameters.
I opened this issue to investigate the results of the Dynamic SuperLoss suggested in this paper.
@hosseinfani @rezaBarzgar