In the scikit-learn implementation the weight vector is assumed to be a random variable, while in my implementation it is treated as a parameter. At each iteration of the algorithm some weights are driven closer and closer to zero, and once they become small enough the components corresponding to those weights are explicitly removed from the model (note that in scikit-learn there is no removal of components). So VBGMMARD automatically selects the number of components through the relevance determination technique described above (the same idea is used in the Relevance Vector Machine, Tipping 2001), while VBGMM in sklearn does not do that.
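For illustration, here is a minimal sketch of that pruning behaviour. The import path and the attribute names (`n_components`, `means_`) are assumptions based on the sklearn-bayes repository layout and scikit-learn conventions, so treat this as a sketch rather than the exact API:

```python
import numpy as np
from skbayes.mixture_models import VBGMMARD  # assumed import path

# Three well-separated 2D clusters
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2) + c for c in ([0, 0], [5, 5], [-5, 5])])

# Start with a deliberately large upper bound on the number of components;
# ARD-style pruning drives the weights of redundant components towards zero
# and removes those components from the model during fitting.
model = VBGMMARD(n_components=15)
model.fit(X)

# After fitting, only the relevant components remain (ideally 3 here)
print("components kept:", model.means_.shape[0])
```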
If you want to automatically select the number of components, sklearn has the DPGMM class. DPGMM uses a Bayesian nonparametric model to infer the number of components.
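A hedged sketch of that route, using the DPGMM class as it existed in scikit-learn at the time (the exact constructor defaults are assumptions):

```python
import numpy as np
from sklearn.mixture import DPGMM  # deprecated in newer scikit-learn releases

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2) + c for c in ([0, 0], [5, 5], [-5, 5])])

# n_components is only an upper bound; the Dirichlet process prior
# concentrates the mixture weights on the components the data supports.
dp = DPGMM(n_components=15, covariance_type='full', alpha=1.0, n_iter=100)
dp.fit(X)

# Components with negligible weight are effectively unused
print("effective components:", np.sum(dp.weights_ > 1e-2))
```

In recent scikit-learn versions DPGMM has been replaced by `BayesianGaussianMixture` with `weight_concentration_prior_type='dirichlet_process'`, which plays the same role.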
Is there any difference between the VBGMMARD class in sklearn_bayes and VBGMM in sklearn?