jcjohnson closed this 5 years ago
A pretrained model is now available. Sorry it took so long.
@EdwardSmith1884 There seems to be only one generic model for all categories. That differs from the category-specific training strategy described in your paper.
Yes, this is true. I was not aware that cross-category training was the norm when I released the paper. I released a model trained cross-category to make it easy for the community to use.
@EdwardSmith1884 So what are the new results from this generic model? Are they better than the results in the paper? Could you share them?
Sure thing, here are the results on the test set:
class: bench, f1: 0.74
class: cabinet, f1: 0.60
class: car, f1: 0.66
class: cellphone, f1: 0.80
class: chair, f1: 0.64
class: lamp, f1: 0.79
class: monitor, f1: 0.65
class: plane, f1: 0.90
class: rifle, f1: 0.93
class: couch, f1: 0.57
class: speaker, f1: 0.52
class: table, f1: 0.74
class: watercraft, f1: 0.77

Total F1 is 0.716
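As a quick sanity check, the total F1 quoted above is consistent with the unweighted mean of the 13 per-class scores. A minimal sketch (the dictionary just restates the numbers from this comment):

```python
# Per-class F1 scores as reported above.
f1_scores = {
    "bench": 0.74, "cabinet": 0.60, "car": 0.66, "cellphone": 0.80,
    "chair": 0.64, "lamp": 0.79, "monitor": 0.65, "plane": 0.90,
    "rifle": 0.93, "couch": 0.57, "speaker": 0.52, "table": 0.74,
    "watercraft": 0.77,
}

# Unweighted mean across the 13 classes.
mean_f1 = sum(f1_scores.values()) / len(f1_scores)
print(round(mean_f1, 3))  # 0.716
```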
As you can see, nearly all classes improved.
@EdwardSmith1884 Thanks! Besides, have you checked the visualization results?
Also, could you share the specific training parameter setup needed to reproduce this number? From my initial training, I can only obtain a mean F1 score of 0.40.
Use the parameter --best_accuracy. To get these results I simply ran with this parameter, trained until the validation F1 score no longer improved, and selected the model that achieved the best validation accuracy.
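The selection rule described here (train until validation F1 stops improving, keep the best checkpoint) can be sketched as early stopping with patience. This is a hedged illustration, not the repo's actual code: `train_one_epoch`, `validation_f1`, and `patience` are hypothetical stand-ins for whatever the training script does internally.

```python
def select_best_model(train_one_epoch, validation_f1, patience=5):
    """Train until validation F1 stops improving; return the best model.

    train_one_epoch(epoch) -> model state after that epoch (hypothetical)
    validation_f1(state)   -> F1 score on the validation split (hypothetical)
    patience               -> epochs without improvement before stopping
    """
    best_f1, best_state = 0.0, None
    epochs_without_gain = 0
    epoch = 0
    while epochs_without_gain < patience:
        state = train_one_epoch(epoch)
        f1 = validation_f1(state)
        if f1 > best_f1:
            # New best validation F1: keep this checkpoint.
            best_f1, best_state = f1, state
            epochs_without_gain = 0
        else:
            epochs_without_gain += 1
        epoch += 1
    return best_state, best_f1
```

With this scheme the returned model is the one that peaked on validation F1, even if later epochs drifted worse.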
I am quite confused as to why you are getting such a low value. Even on models trained for attractive output I get an F1 score of around 0.60.
Would it be possible to provide the pretrained models used in your paper?