Open · sivaramakrishnan-rajaraman opened 1 year ago
Hello @sivaramakrishnan-rajaraman,
As per the documentation, the height and width of the input images should be divisible by 32.
As a result, (256, 256) and (256, 512) work, but (272, 256) throws a ValueError.
Here is the link to the documentation.
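To illustrate the constraint, here is a minimal NumPy sketch of one common workaround (not something stated in this reply): zero-padding the image so that both spatial dimensions become multiples of 32 before feeding it to the model. The helper name `pad_to_multiple_of_32` and the use of plain `np.pad` are illustrative assumptions, not code from the thread.

```python
import numpy as np

def pad_to_multiple_of_32(image):
    """Zero-pad height and width up to the nearest multiple of 32."""
    h, w = image.shape[:2]
    pad_h = (-h) % 32  # extra rows needed to reach a multiple of 32
    pad_w = (-w) % 32  # extra columns needed to reach a multiple of 32
    return np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant")

# (height=256, width=272) -> (256, 288): 288 is divisible by 32, so the
# encoder and decoder feature maps line up again at the Concatenate layers.
padded = pad_to_multiple_of_32(np.zeros((256, 272, 3), dtype=np.float32))
print(padded.shape)  # (256, 288, 3)
```

Padding (rather than resizing) keeps the aspect ratio intact; the padded region can be cropped away from the predicted mask afterwards.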
@kp3393 Excellent! Thanks for the response.
I am using the Qubvel segmentation_models repository (https://github.com/qubvel/segmentation_models) to train an Inception-V3-encoder-based model for a binary segmentation task. I am training with (256 width x 256 height) images and the models work well. If I double one of the dimensions, for example (256 width x 512 height), it also works fine. However, when I adjust for the aspect ratio and resize the images to a custom dimension, say (272 width x 256 height), the model throws the following error:
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concatenation axis. Received: input_shape=[(None, 16, 18, 2048), (None, 16, 17, 768)]
Is there a way to train these models with such custom dimensions? I am using RGB images and grayscale masks for training. Thanks.
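For reference, a minimal sketch of the setup described above, assuming the model is built with the library's `sm.Unet` API and the `inceptionv3` backbone (the issue does not show the actual model-construction code):

```python
import segmentation_models as sm

# Assumed reconstruction of the setup described above: Inception-V3 encoder,
# single-channel sigmoid output for binary segmentation.
model = sm.Unet(
    backbone_name="inceptionv3",
    input_shape=(256, 272, 3),  # height=256, width=272; 272 is not divisible by 32
    classes=1,
    activation="sigmoid",
)
```

With a width of 272, building the model is expected to fail with the `Concatenate` shape mismatch quoted above, whereas (256, 256, 3) or (512, 256, 3) inputs build without error.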