I used data augmentation to generate 3,000+ pairs of images and mask images to train T-Net. The losses are as follows:
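For reference, a minimal sketch of the kind of paired augmentation I mean, where the image and its mask must receive the identical random transform. The function name and the specific flip/crop choices are my assumptions, not the exact pipeline used:

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply the same random flip and crop to an image and its mask."""
    # Random horizontal flip, applied identically to both arrays.
    if rng.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]
    # Random crop to 3/4 of each spatial dimension, same window for both.
    h, w = mask.shape[:2]
    ch, cw = h * 3 // 4, w * 3 // 4
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    return image[top:top + ch, left:left + cw], mask[top:top + ch, left:left + cw]

rng = np.random.default_rng(0)
img = np.zeros((400, 300, 3), dtype=np.uint8)
msk = np.zeros((400, 300), dtype=np.uint8)
aug_img, aug_msk = augment_pair(img, msk, rng)
print(aug_img.shape, aug_msk.shape)  # image and mask keep matching spatial sizes
```

The key point is that the mask goes through the same geometric transforms as the image; augmenting them independently silently corrupts the supervision.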
average loss: 0.75172
saving model ....
epoch end, shuffle datasets again ...
[1 / 300] loss: 0.69093 time: 1548
average loss: 0.55106
saving model ....
epoch end, shuffle datasets again ...
[2 / 300] loss: 0.53391 time: 1538
average loss: 0.47728
saving model ....
epoch end, shuffle datasets again ...
[3 / 300] loss: 0.46429 time: 1537
average loss: 0.42489
saving model ....
epoch end, shuffle datasets again ...
[4 / 300] loss: 0.41367 time: 1537
average loss: 0.38102
saving model ....
epoch end, shuffle datasets again ...
--- (intermediate epochs omitted) ---
[46 / 300] loss: 0.07379 time: 1542
average loss: 0.07126
saving model ....
epoch end, shuffle datasets again ...
When the loss dropped to 0.07, I expected a good segmentation result, but when I ran inference, T-Net (PSPNet50) did not seem to work properly.
I then changed the training data-loading code to:
In theory, T-Net (PSPNet) should produce reasonable segmentation results, but mine don't look right, and I'm not sure where the problem is.