With EVA2, I ran instance segmentation on a custom dataset, training with lazyconfig_train_net.py. The training results are printed in the terminal and are also stored in metrics.json.
The attached picture is visualized from metrics.json. The loss is stable and performance has improved steadily.
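For reference, this is roughly how I produced the plot (a minimal sketch, assuming the detectron2-style metrics.json with one JSON object per line and a "segm/AP" key written at each evaluation):

```python
import json
import matplotlib.pyplot as plt

iters, segm_ap = [], []
with open("output/metrics.json") as f:
    for line in f:
        record = json.loads(line)
        # Only evaluation entries contain "segm/AP"; plain training-loss entries do not.
        if "segm/AP" in record:
            iters.append(record["iteration"])
            segm_ap.append(record["segm/AP"])

plt.plot(iters, segm_ap, marker="o")
plt.xlabel("iteration")
plt.ylabel("segm AP")
plt.title("segm AP from metrics.json")
plt.show()
```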
What puzzles me is that the performance reported in the terminal, which is also stored in log.txt, is unstable. For example, segm AP was 20, then went up to 40 at the next evaluation, then dropped to 25.
This keeps repeating, although at some point the gap between the two narrows. Why do the results in metrics.json and log.txt differ?
Is it because the evaluation runs on four GPUs?