Closed: achaiah closed this issue 4 years ago
Yes.
Subtracting zeros makes no sense. C_1 = np.ones((256, 256)), so I wonder why you don't just implement it as region_in = K.abs(K.sum(y_pred * ((y_true - 1) ** 2)))
That's what I was wondering as well. Also, abs() isn't necessary here, since the term is already squared and y_pred is in the [0, 1] range.
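A quick NumPy sketch of the point being made above (variable names and shapes are illustrative, not from the repo): dropping the constant tensor C_1 and the abs() leaves the value unchanged, because the squared term and y_pred in [0, 1] make the sum non-negative.

```python
import numpy as np

rng = np.random.default_rng(0)
y_pred = rng.random((4, 4))                    # predicted probabilities in [0, 1]
y_true = rng.integers(0, 2, (4, 4)).astype(float)  # binary ground-truth mask

# Original formulation: subtract a constant tensor of ones, then take abs().
C_1 = np.ones_like(y_true)
region_in_original = np.abs(np.sum(y_pred * ((y_true - C_1) ** 2)))

# Simplified: subtract the scalar 1 directly and drop abs();
# the sum is already non-negative.
region_in_simplified = np.sum(y_pred * ((y_true - 1) ** 2))

assert np.isclose(region_in_original, region_in_simplified)
```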
Hi friends. If I'm working with Keras on Colab and I'm using "image_data_format": "channels_last", is it enough to reorder the indices to make it work? For example, with (B, M, N, D), x = y_pred[:,1:,:,:] - y_pred[:,:-1,:,:] instead of (B, D, M, N), x = y_pred[:,:,1:,:] - y_pred[:,:,:-1,:]?
What is the value of y_true in region_in and region_out? Is y_true[region_in] == 1 and y_true[region_out] == 0?
@achaiah @aa1234241 Sorry for the late reply. The loss function was expressed like that because we tried to keep it easy to understand in relation to the general AC equations. Of course, you can simplify the loss function as much as you like in your own experiments.
@lc82111 Hi. y_true is fixed the whole time; it is simply your ground truth.
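To make the earlier question concrete, here is a small check (with an illustrative binary mask) of which pixels each squared term actually selects when C_1 is all ones and C_2 all zeros:

```python
import numpy as np

# Illustrative binary ground-truth mask (1 = inside the object, 0 = outside).
y_true = np.array([[1, 1, 0],
                   [1, 0, 0]])
C_1 = np.ones_like(y_true)   # constant tensor of ones
C_2 = np.zeros_like(y_true)  # constant tensor of zeros

# (y_true - C_1)**2 is 1 exactly where y_true == 0 (outside),
# (y_true - C_2)**2 is 1 exactly where y_true == 1 (inside).
assert np.array_equal((y_true - C_1) ** 2, 1 - y_true)
assert np.array_equal((y_true - C_2) ** 2, y_true)
```

So each term acts as a mask over the inside or outside region of the ground truth.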
@ojedaa Hi friend, as long as the tensor format is defined up front, reordering the indexing like that is fine.
For example:
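A minimal NumPy sketch (shapes are illustrative) showing that the two indexing schemes compute the same finite difference along the height axis, just in different layouts:

```python
import numpy as np

# B = batch, D = channels, M = height, N = width (illustrative sizes).
B, D, M, N = 2, 1, 8, 8
rng = np.random.default_rng(0)
pred_cf = rng.random((B, D, M, N))             # channels_first: (B, D, M, N)
pred_cl = np.transpose(pred_cf, (0, 2, 3, 1))  # channels_last:  (B, M, N, D)

# channels_first: height is axis 2
x_cf = pred_cf[:, :, 1:, :] - pred_cf[:, :, :-1, :]

# channels_last: the same spatial axis is now axis 1
x_cl = pred_cl[:, 1:, :, :] - pred_cl[:, :-1, :, :]

# The two differences agree once the channel axis is moved back.
assert np.allclose(x_cf, np.transpose(x_cl, (0, 3, 1, 2)))
```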
So you're subtracting a tensor of all zeros from y_true in (y_true[:,0,:,:] - C_2)?