xuuuuuuchen / Active-Contour-Loss

Implementation of active contour loss function
MIT License

Is this really the implementation from the paper? #5

Closed · achaiah closed this 4 years ago

achaiah commented 5 years ago

For example:

```python
C_2 = np.zeros((256, 256))
...
region_out = K.abs(K.sum((1 - y_pred[:, 0, :, :]) * ((y_true[:, 0, :, :] - C_2) ** 2)))
```

So you're subtracting a tensor of all zeros from `y_true` in `(y_true[:,0,:,:] - C_2)`?

xuuuuuuchen commented 5 years ago

Yes.

aa1234241 commented 5 years ago

Subtracting zeros makes no sense. And since `C_1 = np.ones((256, 256))` is all ones, I wonder why you don't just implement it as:

```python
region_in = K.abs(K.sum(y_pred * ((y_true - 1) ** 2)))
```

achaiah commented 5 years ago

That's what I was wondering as well. Also, `abs()` here is not necessary: the values are already squared and `y_pred` is in the [0, 1] range.
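For readers following along, a minimal sketch of the suggested simplification (my own code, not from this repo), assuming channels-first tensors of shape (B, 1, H, W) and the Keras backend:

```python
import tensorflow.keras.backend as K

def region_terms(y_true, y_pred):
    """Simplified region terms with C_1 = 1 and C_2 = 0 folded in."""
    # (y_true - 1)^2 replaces (y_true - C_1)^2; y_true^2 replaces (y_true - C_2)^2.
    # abs() is dropped: each summand is a product of non-negative factors
    # when y_pred lies in [0, 1].
    region_in = K.sum(y_pred[:, 0, :, :] * K.square(y_true[:, 0, :, :] - 1))
    region_out = K.sum((1 - y_pred[:, 0, :, :]) * K.square(y_true[:, 0, :, :]))
    return region_in + region_out
```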

ojedaa commented 4 years ago

Hi friends. I'm working with Keras on Colab and using `"image_data_format": "channels_last"`. Is it enough to reorder the indices to make it work? For example, with (B, M, N, D) tensors, using `x = y_pred[:,1:,:,:] - y_pred[:,:-1,:,:]` instead of the (B, D, M, N) version `x = y_pred[:,:,1:,:] - y_pred[:,:,:-1,:]`?

lc82111 commented 4 years ago

What is the value of `y_true` in `region_in` and `region_out`? Is `y_true[region_in] == 1` and `y_true[region_out] == 0`?

xuuuuuuchen commented 4 years ago

@achaiah @aa1234241 Sorry for the late reply. The loss function was expressed that way because we tried to keep it easy to understand in terms of the general active contour (AC) equations. Of course, you can simplify the loss function as much as you like in your own experiments.
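For readers following along, this presumably refers to the Chan–Vese-style fitting energy; a sketch of the standard form, where u is the predicted mask (`y_pred`), v the ground truth (`y_true`), and the constants are fixed to c_1 = 1 and c_2 = 0 in the code:

```latex
E_{\text{region}}
  = \left| \int_{\Omega} u(x)\,\bigl(v(x) - c_1\bigr)^{2}\,dx \right|
  + \left| \int_{\Omega} \bigl(1 - u(x)\bigr)\,\bigl(v(x) - c_2\bigr)^{2}\,dx \right|
```

With c_1 = 1 and c_2 = 0 substituted, these are exactly the `region_in` and `region_out` sums quoted above.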

xuuuuuuchen commented 4 years ago

@lc82111 Hi. `y_true` is fixed all the time; it is your ground truth.
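To make this concrete, a tiny NumPy check with made-up values (1 = inside/foreground, 0 = outside/background):

```python
import numpy as np

y_true = np.array([1., 1., 0., 0.])     # binary ground-truth mask
y_pred = np.array([0.9, 0.8, 0.2, 0.1])  # predicted probabilities

# (y_true - 1)^2 is 1 exactly where y_true == 0, so region_in sums the
# foreground probability predicted on background pixels.
region_in = np.sum(y_pred * (y_true - 1) ** 2)    # 0.2 + 0.1 = 0.3
# y_true^2 is 1 exactly where y_true == 1, so region_out sums the
# background probability predicted on foreground pixels.
region_out = np.sum((1 - y_pred) * y_true ** 2)   # 0.1 + 0.2 = 0.3
```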

xuuuuuuchen commented 4 years ago

@ojedaa Hi friend. As long as you have defined the tensor format consistently up front, reordering the indices like that is fine.
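For completeness, a sketch of the finite differences under channels_last, i.e. tensors of shape (B, H, W, C); this is my own illustration, not code from the repo:

```python
def finite_differences_channels_last(y_pred):
    # y_pred: (B, H, W, C). The spatial axes move from positions 2 and 3
    # (channels_first) to positions 1 and 2, so the differences used by the
    # length term are taken along axes 1 and 2 instead.
    dx = y_pred[:, 1:, :, :] - y_pred[:, :-1, :, :]  # along H (axis 1)
    dy = y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :]  # along W (axis 2)
    return dx, dy
```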