Closed: Dream-High closed this issue 5 months ago
Hi! Thanks for your interest in our paper! For BRATS, I didn't tune the hyperparameters mu and sigma; I simply used mu = 0.5. But I did tune mu for another dataset (lung vessel segmentation), and interestingly I empirically found that mu = 0.4 works better there. That's why I suggest in the paper that mu might be an important hyperparameter. In my experience, for binary segmentation, our Bayesian pseudo labels work better than standard pseudo labels in the early stage of training, and they can at least speed up training; this is because in the later stages the model becomes too confident. Hope my answer helps!
But in general I suggest starting with mu = 0.5 and fixing sigma.
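For readers wondering what mu does in practice: in binary segmentation, pseudo labels are typically obtained by thresholding the model's predicted foreground probabilities, so mu acts as that threshold. A minimal sketch of this thresholding step (a hypothetical helper for illustration, not the authors' actual code):

```python
import numpy as np

def pseudo_label(probs: np.ndarray, mu: float = 0.5) -> np.ndarray:
    """Binarise predicted foreground probabilities at threshold mu.

    probs: array of per-pixel foreground probabilities in [0, 1].
    mu:    the threshold hyperparameter discussed above (0.5 for BRATS,
           0.4 reportedly better for lung vessel segmentation).
    """
    return (probs >= mu).astype(np.float32)

# Lowering mu labels more pixels as foreground:
probs = np.array([0.3, 0.45, 0.6])
pseudo_label(probs, mu=0.5)  # -> [0., 0., 1.]
pseudo_label(probs, mu=0.4)  # -> [0., 1., 1.]
```

This makes the dataset dependence intuitive: thin structures such as lung vessels often receive lower predicted probabilities, so a lower mu can recover more true foreground in the pseudo labels.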
This paper is great work on pseudo labels. I have a question about the initial mu and sigma. Could you please share more details about how they were chosen empirically for different datasets, e.g. BRATS: mu = 0.5, sigma = 0.1?
Hi, I have now updated the model and added a new implementation of the K-L loss that avoids manually searching for the prior mean. I haven't fully tested it yet, but it is in libs.Loss.kld_loss now. I will update the GitHub repo after I finish more experiments. Hope it helps!
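For context on the K-L loss mentioned above: a common way to regularise a learned Gaussian over the threshold toward a prior is the closed-form KL divergence between two univariate Gaussians. The sketch below shows only that standard formula; the actual kld_loss in the repo may differ, and the parameter names here are illustrative:

```python
import math

def kl_gaussians(mu_q: float, sigma_q: float,
                 mu_p: float, sigma_p: float) -> float:
    """Closed-form KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ).

    Illustrative formula only; names are assumptions, not the repo's API.
    The KL is zero when the posterior matches the prior and grows as the
    learned mean/std drift away from it.
    """
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)
```

Minimising such a term pulls the learned threshold distribution toward the prior without having to hand-pick a single fixed mu, which is presumably the motivation for removing the manual prior-mean search.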