xuuuuuuchen / Active-Contour-Loss

Implementation of active contour loss function
MIT License
198 stars · 34 forks

Can't get a good Dice score while using AC loss? #3

Closed · qaqzzz closed this 5 years ago

qaqzzz commented 5 years ago

Hi Xu,

I used AC loss to try segmenting 2-class images. Although the loss is decreasing, the Dice score doesn't improve at all; it stays at a fairly low value, about 0.0001.

```python
import torch
from torch.nn import Module


class ActiveContourLoss(Module):
    def __init__(self):
        super(ActiveContourLoss, self).__init__()

    def forward(self, y_pred, y_true):
        # horizontal and vertical gradients of the prediction
        x = y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :]
        y = y_pred[:, :, :, 1:] - y_pred[:, :, :, :-1]

        delta_x = x[:, :, 1:, :-2]**2
        delta_y = y[:, :, :-2, 1:]**2
        delta_u = torch.abs(delta_x + delta_y)

        epsilon = 1e-8  # small constant so the square root is never taken at zero
        w = 1.

        length = w * torch.sum(torch.sqrt(delta_u + epsilon))  # eq. (11) in the paper

        C_1 = torch.ones(y_true.shape, dtype=torch.float32).cuda()
        C_2 = torch.zeros(y_true.shape, dtype=torch.float32).cuda()

        region_in = torch.abs(torch.sum(y_pred * ((y_true - C_1)**2)))          # eq. (12) in the paper
        region_out = torch.abs(torch.sum((1. - y_pred) * ((y_true - C_2)**2)))  # eq. (12) in the paper

        lambdaP = 5.  # the lambda weight can be tuned
        loss = length + lambdaP * (region_in + region_out)

        return loss
```
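For reference, these are the two terms as computed above (reconstructed from the code: $u$ is `y_pred`, $v$ is `y_true`, $c_1 = 1$, $c_2 = 0$):

$$
\mathrm{Length} = \sum_{i,j} \sqrt{\left| (u_{i+1,j} - u_{i,j})^2 + (u_{i,j+1} - u_{i,j})^2 \right| + \epsilon}
$$

$$
\mathrm{Region} = \left| \sum_{i,j} u_{i,j}\,(v_{i,j} - c_1)^2 \right| + \left| \sum_{i,j} (1 - u_{i,j})\,(v_{i,j} - c_2)^2 \right|,
\qquad
\mathcal{L}_{AC} = \mathrm{Length} + \lambda\,\mathrm{Region}
$$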

This is my PyTorch implementation. In y_pred[:,0,:,:], what does the 0 mean; is it the channel? Does y_pred need to go through a sigmoid before being passed in? Is the input shape (channel, batch size, H, W) or (batch size, channel, H, W)? My y_pred shape is (16, 1, 512, 512), so do I need to modify it?

Best, qaqzzz

xuuuuuuchen commented 5 years ago

Sorry, I didn't notice that you closed it. If you are still wondering, here is my reply:

AC loss (this version) may not work well on imbalanced-label problems. In practice you may need a weighting hyperparameter between region_in and region_out.
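As a sketch of what such a weighting could look like (the helper name and the default weights below are made up and would need tuning per dataset):

```python
def weighted_ac_terms(length, region_in, region_out,
                      lambda_in=1.0, lambda_out=1.0):
    """Sketch: separate weights for the two region terms of the AC loss.

    lambda_in / lambda_out are placeholders to be tuned on imbalanced labels.
    """
    return length + lambda_in * region_in + lambda_out * region_out
```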

The 0 in y_pred[:,0,:,:] means slicing out the 1st channel, so that y_pred gets the same size as C_1 and C_2.

Input shape is 'channel first' like (batch size, channel, H, W).
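Putting those answers together, a minimal calling sketch with dummy tensors (it assumes a CUDA device, since the loss above builds C_1 and C_2 with .cuda()):

```python
import torch

criterion = ActiveContourLoss()

# dummy stand-ins for a sigmoid-activated network output and a binary mask,
# channel first: (batch size, channel, H, W)
logits = torch.randn(16, 1, 512, 512, device="cuda", requires_grad=True)
y_pred = torch.sigmoid(logits)  # probabilities in [0, 1]
y_true = (torch.rand(16, 1, 512, 512, device="cuda") > 0.5).float()

loss = criterion(y_pred, y_true)
loss.backward()
```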

qaqzzz commented 5 years ago

Xu,

Thank you for your reply. I have understood the AC loss and implemented it successfully. Thanks again.

Best wishes, Li


shiqi1994 commented 4 years ago

> Xu, thank you for your reply. I have understood the AC loss and implemented it successfully. Thanks again. Best wishes, Li

Sorry to bother you. Could you please share your PyTorch implementation? I am quite confused about this loss function, and I got very bad performance. :(

Thank you very much!

qaqzzz commented 4 years ago

> Sorry to bother you. Could you please share your PyTorch implementation? I am quite confused about this loss function, and I got very bad performance. :(

Here is my implementation:

```python
import torch
from torch.nn import Module


class ActiveContourLoss(Module):
    def __init__(self):
        super(ActiveContourLoss, self).__init__()

    def forward(self, y_pred, y_true, combine=None):
        # horizontal and vertical gradients of the prediction
        x = y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :]
        y = y_pred[:, :, :, 1:] - y_pred[:, :, :, :-1]

        delta_x = x[:, :, 1:, :-2]**2
        delta_y = y[:, :, :-2, 1:]**2
        delta_u = torch.abs(delta_x + delta_y)

        epsilon = 1e-8  # small constant so the square root is never taken at zero
        w = 1.

        if combine is not None:  # mean reduction when combining with other losses
            length = w * torch.mean(torch.sqrt(delta_u + epsilon))  # eq. (11) in the paper
        else:
            length = w * torch.sum(torch.sqrt(delta_u + epsilon))   # eq. (11) in the paper

        if torch.cuda.is_available():
            C_1 = torch.ones(y_true.shape, dtype=torch.float32).cuda()
            C_2 = torch.zeros(y_true.shape, dtype=torch.float32).cuda()
        else:
            C_1 = torch.ones(y_true.shape, dtype=torch.float32)
            C_2 = torch.zeros(y_true.shape, dtype=torch.float32)

        if combine is not None:
            region_in = torch.abs(torch.mean(y_pred * ((y_true - C_1)**2)))          # eq. (12) in the paper
            region_out = torch.abs(torch.mean((1. - y_pred) * ((y_true - C_2)**2)))  # eq. (12) in the paper
        else:
            region_in = torch.abs(torch.sum(y_pred * ((y_true - C_1)**2)))           # eq. (12) in the paper
            region_out = torch.abs(torch.sum((1. - y_pred) * ((y_true - C_2)**2)))   # eq. (12) in the paper

        lambdaP = 5.  # the lambda weight can be tuned
        loss = length + lambdaP * (region_in + region_out)

        return loss
```
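If I read the combine flag right, it switches from sum to mean reduction so that the AC term stays at a scale where it can be added to another loss. A hedged usage sketch (dice_loss is an illustrative stand-in, not code from this repo):

```python
criterion = ActiveContourLoss()

# standalone: sum reduction, as in the original formulation
loss = criterion(y_pred, y_true)

# combined with another loss: mean reduction keeps the scales comparable
loss = dice_loss(y_pred, y_true) + criterion(y_pred, y_true, combine=True)
```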
shiqi1994 commented 4 years ago


Thank you very much! May I ask whether I need to apply a sigmoid to y_pred? And how do you set the learning rate and the number of iterations? In my experiments the output always looks like this (screenshot attached). I am very confused about why the AC loss by itself does not work...

qaqzzz commented 4 years ago

Hi, please set the learning rate to 0.00005. Otherwise the loss will not work.

qaqzzz commented 4 years ago

Of course you also need to apply a sigmoid to y_pred.
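To make that concrete, a minimal training-loop sketch (model and loader are illustrative names; the learning rate is the value suggested above):

```python
import torch
import torch.optim as optim

criterion = ActiveContourLoss()
optimizer = optim.Adam(model.parameters(), lr=0.00005)  # the lr suggested above

for images, masks in loader:                 # illustrative data loader
    optimizer.zero_grad()
    y_pred = torch.sigmoid(model(images))    # sigmoid before the AC loss
    loss = criterion(y_pred, masks.float())
    loss.backward()
    optimizer.step()
```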

Gaojun211 commented 4 years ago


Hello! I have the same problem now: the Dice score is always 0.00000, even though I set the learning rate to 0.00005, so I'm very confused about it. Could you give me some suggestions? Thank you very much.