Closed: fjparrado closed this issue 3 years ago
@dragonvenenoso I will check it as soon as possible :)
I am sorry, no need, I am stupid :) The code to visualize the images should be:
```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

A = iterator.get_next(name='{:s}_IteratorGetNext'.format(self._dataset_flags))
with tf.Session() as sess:
    X = sess.run(tf.tuple(A))

# Rescale the label image for display and undo the [-1, 1]
# normalization of the RGB image
x = np.zeros((256, 512, 3))
x[:, :] = X[1][13] / 125
plt.imshow(X[0][13] * 0.5 + 0.5)
plt.show()
plt.imshow(x)
plt.show()
```
And then everything is correct.
Edit: I trained the model again and it works perfectly. Thanks a lot for sharing your code!
@dragonvenenoso ok:)
Hi,
I have been trying to train on the Cityscapes dataset, but I was not able to reproduce the results. I am using TF 1.12.0. Debugging the training code, I noticed the problem resides in the function `next_batch(self, batch_size)` in the `cityscapes_tf_io.py` file. If I skip the data augmentation and the shuffle function, the images in the batch are fine:
However, if I uncomment the shuffle function, the image/label "pairs" no longer correspond:
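This symptom usually means the images and labels are shuffled with independent random orders instead of being shuffled as pairs. A minimal NumPy sketch of pair-preserving shuffling, assuming in-memory arrays (the repo's actual pipeline works on a `tf.data`-style iterator, so the names here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": each label is derived from its image, so alignment is checkable.
images = np.arange(8)
labels = images * 10

# One shared permutation applied to both arrays keeps every (image, label)
# pair intact; drawing two separate permutations would scramble the pairing.
perm = rng.permutation(len(images))
images, labels = images[perm], labels[perm]

# The pairing survives the shuffle.
assert np.array_equal(labels, images * 10)
```

The same idea carries over to `tf.data`: shuffle a single dataset of `(image, label)` tuples rather than two parallel datasets.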
Using the original code (uncommenting the data augmentation lines as well), the transformations applied to the RGB image and the label image are not the same.
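For random augmentation, the fix is to drive the transforms for the image and its label mask from the same random draw. A minimal NumPy sketch of pair-preserving flip and crop; the function name, crop size, and in-memory arrays are assumptions for illustration, not the repo's actual API:

```python
import numpy as np

def augment_pair(image, label, rng):
    """Apply identical random flip and crop to an image and its label mask.

    Both tensors are transformed by the same RNG draws, so pixel-level
    image/label correspondence is preserved -- the property that breaks
    when each tensor is augmented independently.
    """
    # Shared random horizontal flip: one coin toss governs both tensors.
    if rng.random() < 0.5:
        image = image[:, ::-1]
        label = label[:, ::-1]

    # Shared random crop to a fixed size (assumes inputs are at least that big).
    crop_h, crop_w = 256, 512
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    image = image[top:top + crop_h, left:left + crop_w]
    label = label[top:top + crop_h, left:left + crop_w]
    return image, label

rng = np.random.default_rng(0)
img = np.arange(300 * 600 * 3).reshape(300, 600, 3)
lbl = img[:, :, 0]  # label aligned pixel-for-pixel with the image
out_img, out_lbl = augment_pair(img, lbl, rng)

# After augmentation the pair is still pixel-aligned.
assert np.array_equal(out_img[:, :, 0], out_lbl)
```

In a TF 1.x pipeline the equivalent is to compute the random flip flag and crop offsets once per example and apply them to both tensors, rather than calling a random op separately for each.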
This is the code I used to visualize the images:
Maybe I am doing something wrong, but I cannot figure it out... Do you have any suggestions? Regards