I have some questions about training your detection network (epropnp_det_basic.py config unchanged) from scratch.
Why is the weight file I got (634.88 MB) much bigger than the provided epropnp_det_basic.pth (212.82 MB)?
Below you can find the evaluation results (1)(2). Are these normal after 12 epochs? The construction vehicle, barrier, and trailer results look strange to me. Maybe these classes are underrepresented in the dataset and stay below 10% precision and recall at every operating point, so the area under the precision-recall curve is counted as 0, with the same logic applying to the TP metrics? I also attached some info about the classes in the mini dataset (3) and the results on the training set after 12 epochs (4)(5).
(1):
(2):
(3):
(4):
(5):
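If the zero scores are indeed a thresholding effect: the nuScenes devkit integrates the precision-recall curve only above 10% recall and 10% precision before renormalizing, so a class whose precision never reaches 10% scores exactly 0 AP. A simplified sketch of that normalization (the function name and the interpolation details are illustrative, not the devkit's actual `calc_ap` code):

```python
import numpy as np

def nuscenes_style_ap(precision, recall, min_recall=0.1, min_precision=0.1):
    """Simplified sketch of the nuScenes AP normalization.

    The PR curve is integrated only above 10% recall and 10% precision and
    then renormalized, so a class whose precision never reaches 10% gets
    an AP of exactly 0.
    """
    # Sample precision at 101 evenly spaced recall points.
    rec_interp = np.linspace(0.0, 1.0, 101)
    prec_interp = np.interp(rec_interp, recall, precision, right=0.0)
    # Keep only the region above the recall threshold, subtract the precision
    # threshold, and clip negatives (precision below 10% contributes nothing).
    prec = np.clip(prec_interp[rec_interp > min_recall] - min_precision, 0.0, None)
    return float(prec.mean() / (1.0 - min_precision))
```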
While inspecting the TensorBoard logs I noticed that no evaluation losses are computed or tracked. Why is that? Are the nuScenes metrics enough to detect overfitting?
I also included the logs and TensorBoard event files from my training as a zip file:
logs.zip
The checkpoint contains not only the weights but also the optimizer states, so it should be about 3x the size.
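If a weights-only file is wanted, the optimizer states can simply be dropped from the saved dict. A minimal sketch, assuming an mmcv/mmdet-style checkpoint dict with `state_dict`, `optimizer`, and `meta` keys (the function name is illustrative):

```python
import torch

def strip_optimizer_states(in_path, out_path):
    """Drop optimizer states from a training checkpoint, keeping only the
    model weights (and meta info, if present) for a much smaller file.

    Assumes an mmcv/mmdet-style checkpoint dict with a 'state_dict' key.
    """
    ckpt = torch.load(in_path, map_location='cpu')
    slim = {'state_dict': ckpt['state_dict']}
    if 'meta' in ckpt:
        slim['meta'] = ckpt['meta']
    torch.save(slim, out_path)
```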
EPro-PnP-Det does not work very well with a small training set, so the results can be pretty bad when trained on the mini set.
Validation loss calculation is disabled by default (otherwise training would take a lot more time), and I think the metrics are good enough for detecting overfitting
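If a validation-loss curve is still wanted, mmcv-style runners (which this training pipeline appears to use) support it through the `workflow` config setting. This is a hedged sketch, not a tested change to this repo's config, and it assumes the val split is set up with an annotation-loading pipeline like the train split:

```python
# In the config file: by default only training runs each epoch, i.e.
# workflow = [('train', 1)]
# Adding a 'val' phase makes the runner also compute and log losses on the
# validation set once per epoch. This adds roughly one extra val pass per
# epoch, and the val dataset must provide ground-truth annotations through
# its pipeline for the losses to be computable.
workflow = [('train', 1), ('val', 1)]
```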