I ran the run_training script exactly as written in the code, but the FROC is only 0.15. I used subsets 0-8 for training and subset 9 for testing, and I extracted the annotations of subset 9 separately for evaluation. What could be the reason?
Thanks,
Here is my result (first column: FPs per scan, second column: sensitivity):
0.25 0.009523809523809525
0.5 0.01904761904761905
1.0 0.047619047619047616
2.0 0.11428571428571428
4.0 0.2
8.0 0.3333333333333333
16.0 0.37142857142857144
[0.1564625850340136, 0.1564625850340136]
ep 131 detp -1.5 0.1564625850340136
ep 131 detp -1 0.1564625850340136
0.1564625850340136 130
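For reference, the reported score 0.1564625850340136 is simply the mean of the seven sensitivities listed above (it equals 23/147), so the evaluation script itself looks self-consistent; a quick check:

```python
# Sensitivities at 0.25, 0.5, 1, 2, 4, 8, 16 FPs/scan, copied from the log above.
sens = [
    0.009523809523809525,
    0.01904761904761905,
    0.047619047619047616,
    0.11428571428571428,
    0.2,
    0.3333333333333333,
    0.37142857142857144,
]

# The final FROC score is the average sensitivity over these thresholds.
froc = sum(sens) / len(sens)
print(froc)  # -> 0.1564625850340136 (within floating-point precision)
```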
Yes, that score is too low and looks wrong. How about evaluating all of the models saved at different epochs instead of only one? Also, did you do the preprocessing properly? You can visualize the preprocessed data together with the labels to check that they line up.
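Before plotting anything, a quick numeric check that every annotated nodule center actually falls inside its preprocessed volume can catch coordinate-order or cropping bugs. This is a minimal sketch, assuming labels are stored as (z, y, x, diameter) rows in voxel coordinates; the exact layout and file names depend on your preprocessing code:

```python
import numpy as np

def check_labels(img, labels):
    """Return indices of label rows whose (z, y, x) center lies outside img.

    img:    preprocessed CT volume, shape (D, H, W)
    labels: array of shape (N, 4) with rows (z, y, x, diameter)
            in voxel coordinates (assumed layout)
    """
    bad = []
    for i, (z, y, x, d) in enumerate(labels):
        inside = (0 <= z < img.shape[0]
                  and 0 <= y < img.shape[1]
                  and 0 <= x < img.shape[2])
        if not inside:
            bad.append(i)
    return bad

# Hypothetical usage with a synthetic volume; the second label is
# deliberately out of range along z to show what gets flagged.
img = np.zeros((128, 256, 256))
labels = np.array([[64.0, 100.0, 100.0, 8.0],
                   [200.0, 50.0, 50.0, 5.0]])
print(check_labels(img, labels))  # -> [1]
```

If any indices are flagged on your real data, the axis order or the cropping in preprocessing is likely inconsistent with the annotation coordinates, which would explain a near-zero FROC.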