johnren-code opened this issue 1 year ago
Hi, this is normal behavior, and the precision will not get much better. These training metrics are only a rough estimate of the actual performance of the model; they are mainly there to tell when it has converged.
At the end of this training, I got a precision of around 0.08-0.09 and a recall of about 0.43. Your training does not seem to have converged yet, so I would keep training it until it fully converges.
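For reference, the precision and recall printed during training are essentially pixel-wise scores on the detection map, which is part of why precision stays low even when the detector works well. The sketch below is not the repository's metric code; it is a minimal illustration with an assumed detection threshold, showing how a few activated pixels around a single ground-truth corner already cap pixel-wise precision well below 1.

```python
# Minimal sketch (assumptions, not the repository's exact metric code):
# pixel-wise precision/recall between a thresholded detection probability map
# and a binary ground-truth keypoint map.
import numpy as np

def pixelwise_precision_recall(prob_map, gt_map, threshold=0.015):
    """prob_map: HxW detection probabilities, gt_map: HxW binary labels."""
    pred = prob_map >= threshold
    gt = gt_map.astype(bool)
    true_positives = np.count_nonzero(pred & gt)
    precision = true_positives / max(np.count_nonzero(pred), 1)
    recall = true_positives / max(np.count_nonzero(gt), 1)
    return precision, recall

# Toy example: one ground-truth corner, several neighbouring pixels fire.
gt = np.zeros((8, 8)); gt[4, 4] = 1
prob = np.zeros((8, 8)); prob[3:6, 3:6] = 0.5   # 9 pixels above threshold
print(pixelwise_precision_recall(prob, gt))      # precision 1/9, recall 1.0
```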
OK, I get it. Thank you very much! I will train it longer later.
Another question: when I looked at the "draw_ellipses" folder in the Synthetic Shapes dataset, there seem to be no labels for ellipses. Is this normal, and will it make MagicPoint less effective at extracting features from images with curved shapes? I ran export_detections.py on the draw_ellipses data after training MagicPoint on the Synthetic Shapes dataset and found no detected points on the ellipses.
Also, in coco.py -> preprocessing -> resize, does the value mean [width, height] or [height, width]?
Sorry to bother you, looking forward to your reply.
MagicPoint is a corner detector, and there are no corners on an ellipse, so there are no labels for ellipses :) The network should then learn not to predict keypoints on curved lines.
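If you want to double-check this on your own export, the sketch below counts detections per image. It assumes export_detections.py wrote one .npz file per image containing a "points" array; the directory path and key name here are placeholders and may need to be adapted to your setup.

```python
# Minimal sketch (not part of the repository): count exported detections per
# image to confirm that ellipse samples get few or no keypoints.
from pathlib import Path
import numpy as np

export_dir = Path("outputs/magic-point_synth/draw_ellipses")  # hypothetical path
for npz_file in sorted(export_dir.glob("*.npz")):
    data = np.load(npz_file)
    key = "points" if "points" in data.files else data.files[0]  # key may differ
    print(f"{npz_file.name}: {len(data[key])} detected keypoints")
```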
Sizes in the config are always given in [height, width] format.
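As a small illustration of this convention (an assumption about typical usage, not the repository's exact preprocessing code): common resize APIs disagree on argument order, so a [height, width] config value may need flipping depending on the backend.

```python
# Sketch only: how a [height, width] config value maps to two common resize APIs.
import numpy as np
import cv2
import tensorflow as tf

resize_hw = [240, 320]                       # config convention: [height, width]
image = np.zeros((480, 640, 3), dtype=np.uint8)

# cv2.resize expects dsize as (width, height), so the order must be flipped.
resized_cv = cv2.resize(image, (resize_hw[1], resize_hw[0]))

# tf.image.resize takes size as [height, width], matching the config directly.
resized_tf = tf.image.resize(tf.constant(image), resize_hw)

print(resized_cv.shape)   # (240, 320, 3)
print(resized_tf.shape)   # (240, 320, 3)
```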
OK, I got it! Thank you very much for your reply!
When I finished training MagicPoint in step 3, I found the precision very low while the recall looked normal. Can you explain the reason?
[11/29/2022 22:29:45 INFO] Start training
2022-11-29 22:29:53.097880: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
[11/29/2022 22:36:39 INFO] Iter 0: loss 4.5983, precision 0.0006, recall 0.0548
[11/29/2022 22:57:42 INFO] Iter 1000: loss 1.1150, precision 0.0010, recall 0.0859
[11/29/2022 23:18:41 INFO] Iter 2000: loss 0.3939, precision 0.0023, recall 0.1901
[11/29/2022 23:39:18 INFO] Iter 3000: loss 0.3312, precision 0.0022, recall 0.1798
[11/29/2022 23:56:05 INFO] Iter 4000: loss 0.2633, precision 0.0122, recall 0.2463
[11/30/2022 00:12:44 INFO] Iter 5000: loss 0.1940, precision 0.0234, recall 0.2626
[11/30/2022 00:29:20 INFO] Iter 6000: loss 0.2116, precision 0.0388, recall 0.2918
[11/30/2022 00:45:58 INFO] Iter 7000: loss 0.2066, precision 0.0448, recall 0.3151
[11/30/2022 01:02:36 INFO] Iter 8000: loss 0.1764, precision 0.0506, recall 0.3261
[11/30/2022 01:19:13 INFO] Iter 9000: loss 0.2105, precision 0.0622, recall 0.3253
[11/30/2022 01:35:50 INFO] Iter 10000: loss 0.1604, precision 0.0577, recall 0.3444
[11/30/2022 01:52:28 INFO] Iter 11000: loss 0.1802, precision 0.0713, recall 0.3414
[11/30/2022 02:09:05 INFO] Iter 12000: loss 0.1931, precision 0.0656, recall 0.3455
[11/30/2022 02:25:43 INFO] Iter 13000: loss 0.1713, precision 0.0710, recall 0.3588
[11/30/2022 02:42:20 INFO] Iter 14000: loss 0.1700, precision 0.0716, recall 0.3644
[11/30/2022 02:58:58 INFO] Iter 15000: loss 0.1748, precision 0.0693, recall 0.3788
[11/30/2022 03:15:36 INFO] Iter 16000: loss 0.1943, precision 0.0720, recall 0.3784
[11/30/2022 03:32:14 INFO] Iter 17000: loss 0.2031, precision 0.0750, recall 0.3746
[11/30/2022 03:48:48 INFO] Training finished
[11/30/2022 03:48:48 INFO] Saving checkpoint for iteration #18000