athaddius / STIRMetrics

Metric Evaluation for Models on STIR

Baseline results on val split #8

Closed: hassony2 closed this 1 week ago

hassony2 commented 2 weeks ago

Thank you very much for providing these neat eval utilities!

Would it be possible for you to share the predicted files for the baselines on the validation set (to enable comparing methods without necessarily having access to the hardware required by the baseline dependencies)?

Have a great day,

Yana

athaddius commented 2 weeks ago

Yes, that makes sense! We are currently generating these, and I'll respond again once we have posted them for the publicly available val split.

athaddius commented 1 week ago

Hi, we have finished running the baselines over the validation set. They can be found here: https://github.com/athaddius/STIRMetrics/tree/main/src/results/baselines

hassony2 commented 1 week ago

Thank you for the super quick reply and for providing these files :)

hassony2 commented 1 week ago

And for completeness, as this might be useful, here are the corresponding metrics I get from running calculate_error_from_json2d.py:

| Accuracy | CONTROL (target_point = query_point) | RAFT baseline | MFT baseline | CSRT baseline |
|---------:|-------------------------------------:|--------------:|-------------:|--------------:|
|  4 px    | 0.06725 | 0.09421 | 0.26901 | 0.17168 |
|  8 px    | 0.12968 | 0.23297 | 0.53405 | 0.37684 |
| 16 px    | 0.24205 | 0.48099 | 0.76844 | 0.58400 |
| 32 px    | 0.41118 | 0.70318 | 0.90238 | 0.70375 |
| Avg      | 0.28632 | 0.47043 | 0.68502 | 0.52571 |

(64 px row: CONTROL 0.58144, RAFT 0.84081, MFT 0.95119, CSRT 0.79228.)
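
For anyone sanity-checking these numbers, here is a minimal sketch of the thresholded accuracy metric as it appears above: the fraction of points within 4/8/16/32/64 px of ground truth, plus the mean over thresholds. This is only an illustration, not the implementation in calculate_error_from_json2d.py; the function name, array shapes, and synthetic data are assumptions.

```python
import numpy as np


def threshold_accuracy(pred_pts, gt_pts, thresholds=(4, 8, 16, 32, 64)):
    """Accuracy at each pixel threshold, plus the average over thresholds.

    pred_pts, gt_pts: (N, 2) arrays of 2D point locations in pixels.
    """
    errors = np.linalg.norm(pred_pts - gt_pts, axis=1)  # per-point endpoint error (px)
    accs = {f"{t} px": float(np.mean(errors < t)) for t in thresholds}
    accs["Avg"] = sum(accs.values()) / len(accs)
    return accs


if __name__ == "__main__":
    # Synthetic demo data (hypothetical): 100 query points and noisy targets.
    rng = np.random.default_rng(0)
    query_pts = rng.uniform(0, 512, size=(100, 2))
    gt_pts = query_pts + rng.normal(0, 20, size=(100, 2))

    # CONTROL row above corresponds to predicting no motion,
    # i.e. target_point = query_point.
    print(threshold_accuracy(query_pts, gt_pts))
```

Note that the CONTROL column follows exactly this recipe with the prediction fixed to the query point, so it measures how far the ground-truth targets drift from their starting locations.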

Thank you again @athaddius for providing the files!