Closed — naga-karthik closed this issue 8 months ago
Hi,
a GMM is not necessarily trained; it all depends on whether you already know how to assign the classes. The EM algorithm you mention is one possible way to estimate the Gaussian classes, but here the classes are already defined by the values in the label map, and the parameters (mean, variance) of these Gaussians are randomised.
So the image G is formed exactly as described in equations 11 to 13 of the paper.
Note that the parameters of the K Gaussians change for each training example.
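To make the sampling step concrete, here is a minimal NumPy sketch of this kind of conditional GMM sampling: for each training example, draw fresh per-class (mean, std) parameters at random, then sample each voxel from the Gaussian of its label. The function name and parameter ranges are illustrative assumptions, not the paper's exact hyperparameters or the SynthSeg implementation.

```python
import numpy as np

def sample_gmm_image(label_map, n_labels, mu_range=(0, 255),
                     sigma_range=(1, 35), seed=None):
    """Sample a synthetic image conditioned on an integer label map.

    No EM / no training: the per-class means and stds are drawn
    uniformly at random for every call, so each training example
    gets a different random contrast. Ranges here are made up for
    illustration.
    """
    rng = np.random.default_rng(seed)
    mus = rng.uniform(*mu_range, size=n_labels)       # one mean per class
    sigmas = rng.uniform(*sigma_range, size=n_labels) # one std per class
    noise = rng.standard_normal(label_map.shape)
    # Each voxel x is drawn from N(mu_{L(x)}, sigma_{L(x)}^2)
    return mus[label_map] + sigmas[label_map] * noise

# Usage: a toy 2-D "label map" with 3 classes
labels = np.array([[0, 0, 1],
                   [1, 2, 2]])
img = sample_gmm_image(labels, n_labels=3, seed=0)
```

Because the parameters are resampled on every call, the same label map yields a differently contrasted image each time, which is the point of the domain-randomisation strategy.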
Hope this helps, Benjamin
Hello! Thank you for the amazing work! Really interesting approach using domain randomization for contrast- and resolution-agnostic brain segmentation!
I had a question about the GMM generative model. In the paper, section 3.1.2 mentions that an initial synthetic scan G is generated by sampling the GMM conditioned on the (deformed) label map. Now, in my (limited) understanding of GMMs, I know that their parameters are usually estimated with an expectation-maximization (EM) procedure. I am unable to find any info about the EM algorithm in the paper or the code. Could you please provide some insights on how this GMM was trained? In fact, was it trained at all? I think I am missing something here.
Thank you!