Open SparkElf opened 1 year ago
Hello, I would appreciate it if you could explain why the original self2self.py doesn't use the noisy image to train the model. When I used the noisy image, the result went wrong: the output looked much whiter (brighter) than the ground truth.
I also have this problem.
I also have this problem.
I also have this problem.
Recently I also tried to reproduce the code, and I think I can answer your question. Notice that the author here multiplies the training input by a Bernoulli mask with keep probability 0.7 but never rescales it. That rescaling is needed because the original TensorFlow version used tf.nn.dropout(x, keep_prob), which multiplies the kept values by 1/keep_prob so the expected intensity of the input is unchanged. Your brighter output is likely caused by exactly this mismatch: the network is trained on inputs whose expected intensity is scaled down by 0.7, but at evaluation time the code feeds the full-intensity image without applying the 0.7 multiplier, so the network sees brighter inputs than it was trained on and produces brighter outputs.
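To make the scaling mismatch concrete, here is a minimal NumPy sketch (not the repository's code) contrasting a plain Bernoulli mask with the inverted dropout that tf.nn.dropout(x, keep_prob) applies; the values and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
p_keep = 0.7
x = np.ones((1000, 1000))  # a constant "image" so the effect is easy to read off

# Plain Bernoulli masking, as in the PyTorch reimplementation:
# each pixel is kept with probability 0.7 and zeroed otherwise,
# so the expected intensity of the masked input is 0.7 * x.
mask = (rng.random(x.shape) < p_keep).astype(x.dtype)
masked = x * mask
print(masked.mean())  # close to 0.7

# Inverted dropout, as in tf.nn.dropout: kept values are divided
# by p_keep, so the expected intensity matches the original input.
inverted = x * mask / p_keep
print(inverted.mean())  # close to 1.0
```

With plain masking, the network is trained on inputs that are on average 0.7x as bright as the clean image; feeding the full-intensity image at evaluation time then shifts the input distribution upward, which is consistent with the overly white output. Either dividing the masked training input by the keep probability (as TensorFlow does) or applying the same mask statistics at evaluation should remove the brightness shift.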