abramjos / Exudation-of-Eye

Using UNet for detection of Exudation. Medical Imaging

Problem with dimensions?? #1

Open dimitheodoro opened 4 years ago

dimitheodoro commented 4 years ago

By running the code I get this:

`Input 0 is incompatible with layer conv2d_29: expected ndim=4, found ndim=3`

I changed `model=UNet((width,width))` to `model=UNet((width,width,3))` and get:

`Error when checking input: expected input_6 to have 4 dimensions, but got array with shape (0, 1)`

I altered main.py:

```python
TRAIN_PATH='......my path to.../train'
seed=42
random.seed = seed
np.random.seed = seed

im_list=os.listdir(os.path.join(TRAIN_PATH,'images'))
mask_list=os.listdir(os.path.join(TRAIN_PATH,'masks'))

X_train=[]
Y_train=[]

width=1024

for n, id_ in tqdm(enumerate(im_list), total=len(im_list)):
    im=cv2.imread(os.path.join(TRAIN_PATH+'/','images/',im_list[n]))
    im=cv2.resize(im,(width,width),interpolation = cv2.INTER_CUBIC)
    X_train.append(im)

for n, id_ in tqdm(enumerate(mask_list), total=len(mask_list)):
    mask=cv2.imread(os.path.join(TRAIN_PATH+'/','masks/',mask_list[n]),0)
    mask=cv2.resize(mask,(width,width),interpolation = cv2.INTER_CUBIC)
    Y_train.append(mask)

X_train=np.array(X_train)
Y_train=np.array(X_train)

Y_train=Y_train.reshape(Y_train.shape+(1,))

model=UNet((width,width,3))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['mse',dice_coef])
model.summary()

earlystopper = EarlyStopping(patience=5, verbose=1)
checkpointer = ModelCheckpoint('unet.{epoch:02d}-{val_loss:.2f}.h5', monitor='val_dice_coef', verbose=1, save_best_only=True)
results = model.fit(X_train, Y_train, validation_split=0.1, batch_size=16, epochs=2, callbacks=[earlystopper, checkpointer])
```

and get:

`Error when checking target: expected conv2d_55 to have 4 dimensions, but got array with shape (191, 1024, 1024, 3, 1)`

So I am really wondering what is happening, or whether I am doing something wrong?
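As a side note, the 5-D target shape in the last error can be reproduced with plain NumPy. The script above builds the target array from the RGB images (`Y_train=np.array(X_train)`), which stacks to `(N, H, W, 3)`, and the subsequent `reshape(... + (1,))` appends a fifth axis. A minimal sketch with placeholder arrays (small sizes chosen only for illustration):

```python
import numpy as np

# Placeholder stand-ins for two RGB images (tiny sizes instead of 1024x1024)
X_train = [np.zeros((8, 8, 3), dtype=np.uint8) for _ in range(2)]

# As in the script: the target array is built from the 3-channel images...
Y_train = np.array(X_train)                       # shape (2, 8, 8, 3)
# ...and then gets an extra trailing axis appended
Y_train = Y_train.reshape(Y_train.shape + (1,))   # shape (2, 8, 8, 3, 1)

print(Y_train.ndim, Y_train.shape)  # 5 (2, 8, 8, 3, 1)
```

This matches the `(191, 1024, 1024, 3, 1)` shape in the error message, where the `3` comes from the image channels rather than from the grayscale masks.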

abramjos commented 4 years ago

Hi dimitheodoro,

my input dataset is a 3-channel input. You don't have to give `model=UNet((width,width,3))`, as the model always takes the 3rd channel. You can simply provide `model=UNet((width, height))` and that's all; the model will consider the 3-channel RGB input. Hope this helps.
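Whatever shape the model constructor takes, `fit` still expects 4-D batches on both sides: inputs of shape `(batch, height, width, 3)` and, for a single-class mask, targets of shape `(batch, height, width, 1)`. A NumPy sketch of the shapes involved (sizes and counts are placeholders; the key point is building `Y_train` from the grayscale mask list, not from `X_train`):

```python
import numpy as np

# Hypothetical stand-ins: N RGB images and N grayscale masks
images = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(5)]
masks  = [np.zeros((64, 64), dtype=np.uint8) for _ in range(5)]

X_train = np.array(images)                        # (5, 64, 64, 3): 4-D input batch
Y_train = np.array(masks)                         # (5, 64, 64): stacked from the masks
Y_train = Y_train.reshape(Y_train.shape + (1,))   # (5, 64, 64, 1): 4-D target batch

print(X_train.shape, Y_train.shape)
```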

dimitheodoro commented 4 years ago

Thanks for the response. But as I explained, with `model=UNet((width,width))` I got

`Input 0 is incompatible with layer conv2d_14: expected ndim=4, found ndim=3`

That's why I was experimenting with `model=UNet((width,width,3))`.

I am sending you my code in one script, as I have implemented it in Google Colab. I don't use augmented images; I am trying just plain images to understand the code's philosophy. unet_exudates.zip