jodusan closed this issue 4 years ago.
Hi @dulex123, can you show me the stdout from your training session and share the output of `pip freeze` for your working environment?
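(A minimal sketch of how one might capture both, assuming a standard Python install; the log filenames here are arbitrary placeholders, and `download_and_train.py` is the script named later in this thread.)

```python
# Sketch: save the training stdout and the environment listing to files
# so they can be attached to the issue.
import subprocess

# Run the training script and capture everything it prints (stdout + stderr).
with open("train_stdout.log", "w") as log:
    subprocess.run(["python", "download_and_train.py"],
                   stdout=log, stderr=subprocess.STDOUT)

# Record the installed package versions.
with open("pip_freeze.txt", "w") as env:
    subprocess.run(["pip", "freeze"], stdout=env)
```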
@dulex123 I built a new environment and reinstalled from master, and I cannot reproduce your issue. Any logs, etc. would be useful for debugging.
@gregjohnso Thanks for the quick reply! I think I made a mistake by not looking at the absolute loss. At the end the loss was around 130; is this expected? (It started from ~160k.)
[Attached screenshots: prediction, target, and loss curve]
If all looks good, is the "grid-like" output expected because of the convolutions (since it basically tries to predict noise)?
Thanks!
That looks more or less like the expected output to me. Be sure to play with the contrast range setting in your image viewer; these images often look washed out because the display range is not set correctly to the span of pixel values. This applies to the grid pattern as well.
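(A minimal sketch of one way to do this with matplotlib, clipping the display range to the 1st-99th percentile of the pixel values; `prediction.npy` is a hypothetical filename standing in for whatever array you saved.)

```python
# Sketch: set vmin/vmax from percentiles so a few extreme pixels do not
# wash out the rest of the image.
import numpy as np
import matplotlib.pyplot as plt

img = np.load("prediction.npy")  # hypothetical path; substitute your saved prediction or target
vmin, vmax = np.percentile(img, [1, 99])
plt.imshow(img, cmap="gray", vmin=vmin, vmax=vmax)
plt.colorbar()
plt.show()
```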
Another thing to note is that although the loss improves by a few orders of magnitude, most of that happens in the first few iterations. If you set the y-axis limits to 0 through, say, 500, you will see that it continues to converge for quite a long time.
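(An illustration of the plotting point, with simulated loss values only to show the effect of capping the y-axis; the numbers are not from this training run.)

```python
# Sketch: the same loss curve, full range vs. y-axis capped at 500.
import numpy as np
import matplotlib.pyplot as plt

iters = np.arange(5000)
# Simulated losses: a large initial drop followed by slow convergence.
losses = 160_000 * np.exp(-iters / 20) + 500 * np.exp(-iters / 2000) + 130

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(iters, losses)
ax1.set_title("Full range: looks flat after the first iterations")
ax2.plot(iters, losses)
ax2.set_ylim(0, 500)
ax2.set_title("y-limits 0-500: convergence is still visible")
plt.show()
```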
If you want an improved result you can do a few things, including increasing the amount of training data (I think the default is 40 images; try setting it to 200). Playing with some of the optimization parameters might (or might not) help too.
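(A hedged sketch of those two knobs in generic PyTorch terms; the names `n_train_images`, `learning_rate`, and `batch_size` are hypothetical placeholders, not the repository's actual options or flags.)

```python
# Sketch: more training data and tweaked optimizer settings, expressed as a
# generic PyTorch-style configuration (names are illustrative only).
import torch

config = {
    "n_train_images": 200,   # the thread reports the default as 40
    "learning_rate": 1e-4,
    "batch_size": 16,
}

# Placeholder model just to show where optimizer parameters would be tuned.
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(),
                             lr=config["learning_rate"],
                             betas=(0.9, 0.999))
```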
I'm going to close this issue but will label it so it might help out others who get stuck in the same way.
I ran the download_and_train.py script but the loss doesn't go down. Does the master branch train successfully?