In another issue, you said that the input shape in AC loss is [Batch, 1, W, H].
Does that mean y_pred is the output of a softmax layer followed by an argmax (or a pick operator) that selects the probability of the foreground (or background)?
If the shape of y_pred is [Batch, 1, W, H], why do you slice out the first channel (y_pred[:, 0, :, :]) when implementing Eq. (12) but not when implementing Eq. (11)?
y_pred is the output of a sigmoid layer, because this is a 2-class segmentation problem.
It depends on how you pre-define the shape of C_1/C_2. If C_1/C_2 are defined as np.ones/zeros((256, 256)), then y_pred must be sliced to the matching size, i.e. y_pred[:, 0, :, :]. Conversely, you could use y_pred directly without slicing out the first channel, but then C_1/C_2 need to be pre-defined as np.ones/zeros((1, 256, 256)) so the shapes broadcast.
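A minimal NumPy sketch of the shape bookkeeping described above. The batch size, W, and H values are placeholders, and `region_term` is just the squared-difference region term of the AC loss, not the full loss implementation:

```python
import numpy as np

# Hypothetical shapes mirroring the discussion: y_pred is [Batch, 1, W, H],
# e.g. a sigmoid output with values in (0, 1).
B, W, H = 4, 256, 256
rng = np.random.default_rng(0)
y_pred = rng.random((B, 1, W, H))

# Option A: define C_1/C_2 as (W, H) and slice out the channel axis.
C_1 = np.ones((W, H))
region_a = (y_pred[:, 0, :, :] - C_1) ** 2  # (B, W, H) vs (W, H) broadcast

# Option B: keep a leading channel axis on C_1/C_2 and skip the slicing.
C_1b = np.ones((1, W, H))
region_b = (y_pred - C_1b) ** 2             # (B, 1, W, H) vs (1, W, H) broadcast

print(region_a.shape)  # (4, 256, 256)
print(region_b.shape)  # (4, 1, 256, 256)
```

Both options compute the same per-pixel values; the only difference is whether the singleton channel axis is removed by slicing or carried through by broadcasting.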