Closed luciaL closed 4 years ago
Hi,
I have added both dropout and perturbation to the inputs when calculating the uncertainty.
Oh, could you please point me to the lines where the uncertainty is calculated? Thank you so much for your reply. Best wishes.
The code you mentioned above is what calculates the uncertainty: we add different perturbations to the inputs and compute the entropy of the average prediction. Besides entropy, you can also calculate the variance across the stochastic predictions.
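A minimal sketch of both options (entropy of the mean prediction, and per-class variance across passes), assuming the T stochastic softmax outputs have already been stacked along a leading dimension; the helper name `predictive_stats` is hypothetical, not from the repository:

```python
import torch

def predictive_stats(preds_T):
    """Given T stochastic softmax predictions stacked along dim 0
    (shape [T, batch, C, ...]), return the mean prediction, its
    predictive entropy, and the per-class variance across passes."""
    mean_pred = preds_T.mean(dim=0)  # [batch, C, ...]
    # entropy of the averaged prediction, summed over the class dim
    entropy = -(mean_pred * torch.log(mean_pred + 1e-6)).sum(dim=1, keepdim=True)
    # variance of the predictions across the T passes
    variance = preds_T.var(dim=0, unbiased=False)  # [batch, C, ...]
    return mean_pred, entropy, variance

# toy example: T=4 passes, batch=1, 2 classes, one "voxel"
preds_T = torch.softmax(torch.randn(4, 1, 2, 1), dim=2)
mean_pred, entropy, variance = predictive_stats(preds_T)
```

Both `entropy` and `variance` are per-voxel maps, so either can be thresholded to mask out unreliable regions.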
Okay, got it. Thank you so much.
Hello, @yulequan @luciaL. Why is the repeat needed? I do not understand this step.
Hello, @yulequan
I read your code carefully; uncertainty estimation is a new idea for me. I see your implementation of Monte Carlo Dropout below:
```python
T = 8
# duplicate the unlabeled batch so each forward pass covers 2 copies
volume_batch_r = unlabeled_volume_batch.repeat(2, 1, 1, 1, 1)
stride = volume_batch_r.shape[0] // 2
preds = torch.zeros([stride * T, 2, 112, 112, 80]).cuda()
for i in range(T // 2):
    # add clipped Gaussian noise to the inputs for each stochastic pass
    ema_inputs = volume_batch_r + torch.clamp(torch.randn_like(volume_batch_r) * 0.1, -0.2, 0.2)
    with torch.no_grad():
        preds[2 * stride * i:2 * stride * (i + 1)] = ema_model(ema_inputs)
preds = F.softmax(preds, dim=1)
preds = preds.reshape(T, stride, 2, 112, 112, 80)
preds = torch.mean(preds, dim=0)  # (batch, 2, 112, 112, 80)
# predictive entropy of the averaged softmax output
uncertainty = -1.0 * torch.sum(preds * torch.log(preds + 1e-6), dim=1, keepdim=True)
```
I wonder if this is the most common way to implement uncertainty with the mean teacher method. I think your implementation adds perturbation to the inputs rather than dropout to the network, although both approaches add regularization.
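For comparison, "classical" MC Dropout keeps the dropout layers stochastic at inference and averages over T forward passes on the *same* input. A minimal sketch of that variant, using a hypothetical toy network (not the paper's V-Net, which has no batch norm concerns here since this toy model uses none):

```python
import torch
import torch.nn as nn

# hypothetical small classifier with a dropout layer, for illustration only
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(32, 2),
)

def mc_dropout_predict(model, x, T=8):
    """Run T stochastic forward passes with dropout active and
    return the stacked softmax predictions, shape [T, batch, C]."""
    model.train()  # train mode keeps nn.Dropout sampling masks at inference
    with torch.no_grad():
        preds = torch.stack([torch.softmax(model(x), dim=1) for _ in range(T)])
    model.eval()
    return preds

x = torch.randn(4, 16)
preds = mc_dropout_predict(model, x)
mean_pred = preds.mean(dim=0)                                    # [batch, C]
uncertainty = -(mean_pred * torch.log(mean_pred + 1e-6)).sum(dim=1)
```

Note that calling `model.train()` would also switch batch-norm layers to batch statistics, so in a real network you would enable only the dropout modules; input perturbation, as used in the quoted code, sidesteps that issue entirely.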