nabsabraham / focal-tversky-unet

This repo contains the code for our paper "A novel focal Tversky loss function and improved Attention U-Net for lesion segmentation" accepted at IEEE ISBI 2019.

Help with attn_reg model #7

Closed DecentMakeover closed 5 years ago

DecentMakeover commented 5 years ago

Hi

I am trying to use the `attn_reg_ds` and `attn_reg` models, and this is how the model definition looks at the end:

    model = Model(inputs=[img_input], outputs=[out6, out7, out8, out9])

    loss = {'pred1': loss_function,
            'pred2': loss_function,
            'pred3': loss_function,
            'final': loss_function}

    loss_weights = {'pred1': 1,
                    'pred2': 1,
                    'pred3': 1,
                    'final': 1}
    # model.compile(optimizer=opt, loss=loss, loss_weights=loss_weights)
    model.compile(optimizer=optimizer(lr=initial_learning_rate), loss=loss)
    return model

and I have modified the loss function to look like this (adapted for 5D/3D tensors):

def class_tversky(y_true, y_pred):
    smooth = 1

    y_true = K.permute_dimensions(y_true, (1,2,3,4,0))
    y_pred = K.permute_dimensions(y_pred, (1,2,3,4,0))

    y_true_pos = K.batch_flatten(y_true)
    y_pred_pos = K.batch_flatten(y_pred)
    true_pos = K.sum(y_true_pos * y_pred_pos, 1)
    false_neg = K.sum(y_true_pos * (1-y_pred_pos), 1)
    false_pos = K.sum((1-y_true_pos)*y_pred_pos, 1)
    alpha = 0.7
    return (true_pos + smooth)/(true_pos + alpha*false_neg + (1-alpha)*false_pos + smooth)

def focal_tversky_loss(y_true,y_pred):
    pt_1 = class_tversky(y_true, y_pred)
    gamma = 0.75
    return K.sum(K.pow((1-pt_1), gamma))
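For sanity-checking the arithmetic outside of Keras, here is a hypothetical NumPy re-implementation of the two functions above. The channels-first 5D layout `(batch, channels, x, y, z)` is an assumption (it matches DUnetCNN's default); `np.moveaxis` plus `reshape` play the role of `K.permute_dimensions` followed by `K.batch_flatten`, so each Tversky index is computed per channel.

```python
import numpy as np

def class_tversky_np(y_true, y_pred, smooth=1, alpha=0.7):
    # Move the batch axis to the end, then flatten everything except the
    # channel axis: (batch, c, x, y, z) -> (c, x*y*z*batch).
    c = y_true.shape[1]
    y_true = np.moveaxis(y_true, 0, -1).reshape(c, -1)
    y_pred = np.moveaxis(y_pred, 0, -1).reshape(c, -1)
    true_pos = np.sum(y_true * y_pred, axis=1)
    false_neg = np.sum(y_true * (1 - y_pred), axis=1)
    false_pos = np.sum((1 - y_true) * y_pred, axis=1)
    return (true_pos + smooth) / (true_pos + alpha * false_neg
                                  + (1 - alpha) * false_pos + smooth)

def focal_tversky_loss_np(y_true, y_pred, gamma=0.75):
    # One loss term per channel, summed.
    pt_1 = class_tversky_np(y_true, y_pred)
    return np.sum((1 - pt_1) ** gamma)
```

With a perfect prediction the per-channel Tversky index is 1 and the loss is 0; an all-background prediction against an all-foreground mask drives the index toward `smooth / (alpha * N + smooth)` and the loss toward the number of channels.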

but when I run this I get:

      File "DUnetCNN/unet3d/training.py", line 88, in train_model
        early_stopping_patience=early_stopping_patience))
      File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 2230, in fit_generator
        class_weight=class_weight)
      File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1877, in train_on_batch
        class_weight=class_weight)
      File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1480, in _standardize_user_data
        exception_prefix='target')
      File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 86, in _standardize_input_data
        str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
    ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 4 array(s), but instead got the following list of 1 arrays: [array([[[[[1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], ..., [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1],

Any idea on what the issue might be?

Thanks

JakobKHAndersen commented 5 years ago

Take a look at the bus_train.py file from line 80:

    gt1 = imgs_mask_train[:,::8,::8,:]
    gt2 = imgs_mask_train[:,::4,::4,:]
    gt3 = imgs_mask_train[:,::2,::2,:]
    gt4 = imgs_mask_train
    gt_train = [gt1, gt2, gt3, gt4]

By performing these downsampling operations on my training masks prior to fitting the model, I managed to avoid the error: the model has four outputs, so it expects a list of four target arrays, one per output resolution.
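A minimal sketch of what those strided slices produce, assuming 2D masks of shape `(batch, H, W, 1)` (the `(16, 128, 128, 1)` shape here is illustrative only). Each step slices every 8th, 4th, or 2nd pixel, yielding the 1/8-, 1/4-, and 1/2-resolution targets for the intermediate `pred1`/`pred2`/`pred3` outputs, with the full-resolution mask for `final`:

```python
import numpy as np

# Hypothetical training masks, (batch, H, W, channels).
imgs_mask_train = np.random.randint(0, 2, size=(16, 128, 128, 1))

gt1 = imgs_mask_train[:, ::8, ::8, :]   # 1/8 resolution
gt2 = imgs_mask_train[:, ::4, ::4, :]   # 1/4 resolution
gt3 = imgs_mask_train[:, ::2, ::2, :]   # 1/2 resolution
gt4 = imgs_mask_train                   # full resolution
gt_train = [gt1, gt2, gt3, gt4]         # one target per model output

# model.fit(imgs_train, gt_train, ...)  # pass the list, not a single array
```

For a generator-based pipeline like DUnetCNN's `fit_generator`, the same list would have to be built inside the generator so every batch yields four target arrays.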