This is the entire model, with the GMM fitted to the deep representation and no encouragement of Gaussianity. Sigh...
Retrain the model on the special binary dataset (#21). I will train three separate instances with three different aggregators, and I need to complete #16 first. Once that is done, gather the MLP accuracy for classifying MDIVI vs LLO, and infer logits/GMM representations for MDIVI, LLO, and Control. Then I'll make some pretty plots (#23) and simple classifiers (#24) to write up the results.
If the final plots show nothing, try again with encouraged Gaussianity (that will not finish by March 26).
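The GMM-on-representations plus simple-classifier steps above could be sketched roughly as below. This is only an illustrative sketch: the representation arrays are synthetic stand-ins for the model's deep features, and the component count and logistic-regression probe are assumptions, not the actual pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical stand-ins for deep representations of MDIVI and LLO samples;
# in practice these would be inferred from the retrained model.
reps_mdivi = rng.normal(0.0, 1.0, size=(100, 16))
reps_llo = rng.normal(2.0, 1.0, size=(100, 16))

# Fit a GMM to the pooled deep representations (no Gaussianity encouraged).
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(np.vstack([reps_mdivi, reps_llo]))

# Per-sample GMM posteriors give a compact representation
# that a simple classifier can probe for MDIVI-vs-LLO separability.
X = gmm.predict_proba(np.vstack([reps_mdivi, reps_llo]))
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)
print(f"MDIVI-vs-LLO accuracy on synthetic reps: {clf.score(X, y):.2f}")
```

The same `predict_proba` representations could also be inferred for Control samples and fed to the plotting step.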