EdwardSmith1884 / GEOMetrics

Repo for the paper "GEOMetrics: Exploiting Geometric Structure for Graph-Encoded Objects"
MIT License

Pretrained models? #1

Closed jcjohnson closed 5 years ago

jcjohnson commented 5 years ago

Would it be possible to provide the pretrained models used in your paper?

EdwardSmith1884 commented 5 years ago

A pretrained model is now available. Sorry it took so long.

KnightOfTheMoonlight commented 4 years ago

@EdwardSmith1884 There seems to be only one generic model for all categories. This differs from the category-specific training strategy described in your paper.

EdwardSmith1884 commented 4 years ago

Yes, that's true. I was not aware that cross-category training was the norm when I released the paper, so I released a model trained cross-category to make it easier for the community to use.

KnightOfTheMoonlight commented 4 years ago

@EdwardSmith1884 So what are the results from this generic model? Are they better than the results in the paper? Could you share them?

EdwardSmith1884 commented 4 years ago

Sure thing, here are the results on the test set:

| Class | F1 |
| --- | --- |
| bench | 0.74 |
| cabinet | 0.60 |
| car | 0.66 |
| cellphone | 0.80 |
| chair | 0.64 |
| lamp | 0.79 |
| monitor | 0.65 |
| plane | 0.90 |
| rifle | 0.93 |
| couch | 0.57 |
| speaker | 0.52 |
| table | 0.74 |
| watercraft | 0.77 |

Total F1: 0.716

As you can see, nearly all classes improved.
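
For reference, the total appears to be the unweighted mean of the per-class scores; a quick check (assuming classes are weighted equally, not by the number of test samples per class):

```python
# Sanity check: the total F1 above matches the unweighted mean of the
# per-class scores (assumption: equal weight per class, not per sample).
per_class_f1 = {
    "bench": 0.74, "cabinet": 0.60, "car": 0.66, "cellphone": 0.80,
    "chair": 0.64, "lamp": 0.79, "monitor": 0.65, "plane": 0.90,
    "rifle": 0.93, "couch": 0.57, "speaker": 0.52, "table": 0.74,
    "watercraft": 0.77,
}
mean_f1 = sum(per_class_f1.values()) / len(per_class_f1)
print(f"mean F1 = {mean_f1:.3f}")  # -> 0.716
```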

KnightOfTheMoonlight commented 4 years ago

@EdwardSmith1884 Thanks! Also, have you checked the visualization results?

Could you also share the specific training parameter setup needed to reproduce this number? From my initial training, I can only obtain a mean F1 score of 0.40.

EdwardSmith1884 commented 4 years ago

Use the `--best_accuracy` parameter. To get these results I simply ran with this parameter, trained until the validation F1 score no longer improved, and selected the model that achieved the best validation accuracy.
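
In other words, standard best-checkpoint selection on the validation set. A minimal sketch of that loop, assuming caller-supplied `train_one_epoch` and `evaluate_f1` callables (hypothetical placeholders, not functions from this repository):

```python
import torch

def train_with_best_checkpoint(model, train_one_epoch, evaluate_f1,
                               patience=10, ckpt_path="best_model.pt"):
    """Train until validation F1 stops improving; keep the best checkpoint.

    `train_one_epoch` and `evaluate_f1` are caller-supplied callables --
    hypothetical placeholders, not this repo's API.
    """
    best_val_f1 = 0.0
    epochs_without_improvement = 0
    while epochs_without_improvement < patience:
        train_one_epoch(model)
        val_f1 = evaluate_f1(model)
        if val_f1 > best_val_f1:
            # New best validation score: reset the counter and save weights.
            best_val_f1 = val_f1
            epochs_without_improvement = 0
            torch.save(model.state_dict(), ckpt_path)
        else:
            epochs_without_improvement += 1
    return best_val_f1
```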

EdwardSmith1884 commented 4 years ago

I am quite confused as to why you are getting such a low value. Even on models trained for attractive output I get an F1 score of around 0.60.