wq-wust opened this issue 1 year ago
Hi,
Training a GMM on a given dataset and getting good results depends a lot on the initial locations of the Gaussians that constitute the GMM. Randomly initializing the positions of the Gaussians and directly training the GMM will often give poor results, because the training algorithm is Expectation-Maximization (EM), which guarantees that the likelihood never decreases at each step but only converges to a local optimum. Algorithms of this type therefore depend heavily on how good the initialization is.
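To make this concrete, here is a small sketch (using scikit-learn's `GaussianMixture` rather than the library from this thread) that runs EM one iteration at a time and records the likelihood bound, which climbs monotonically from wherever the random initialization started:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2., 0.5, size=(200, 1)),
               rng.normal(2., 0.5, size=(200, 1))])

# Run EM one iteration at a time (warm_start keeps the current parameters
# between fit calls) and record the log-likelihood bound after each step.
gmm = GaussianMixture(n_components=2, init_params='random', max_iter=1,
                      warm_start=True, tol=0.0, random_state=3)
bounds = []
for _ in range(20):
    gmm.fit(X)                     # one EM iteration per call
    bounds.append(gmm.lower_bound_)

# EM guarantees the bound never decreases, but it only climbs to a
# local optimum, so where it ends up depends on the initialization.
print(all(b2 >= b1 - 1e-9 for b1, b2 in zip(bounds, bounds[1:])))
```

(The `ConvergenceWarning` from `max_iter=1` is expected and harmless here.)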
There are different ways to initialize a GMM. One is to manually place each Gaussian of the GMM, where each mean gives the center position of a Gaussian and each covariance gives its shape (i.e. an ellipsoid whose axes are aligned with the eigenvectors of the covariance matrix).
A more automatic way is to randomly place these Gaussians, call a clustering algorithm like k-means to get a better idea of where the center of each Gaussian should be placed, and only then train the GMM.
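As an illustration of that second approach, here is a minimal sketch with scikit-learn (not the library from this thread): k-means finds the cluster centers, and those centers seed the GMM means before EM runs:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
X = np.vstack([rng.normal([-3., 0.], 0.5, size=(150, 2)),
               rng.normal([3., 0.], 0.5, size=(150, 2)),
               rng.normal([0., 4.], 0.5, size=(150, 2))])

# Step 1: run k-means to find sensible center positions.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: use the k-means centers as the initial Gaussian means,
# then run EM from there.
gmm = GaussianMixture(n_components=3, means_init=kmeans.cluster_centers_,
                      random_state=0).fit(X)
print(np.round(gmm.means_, 1))
```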
In both examples (`gmr.py` and `gmr_letter.py`), the Gaussians are first randomly initialized, then `gmm.init(X, method=init_method)` (with `init_method = 'k-means'`) is called to move these Gaussians. Then, the GMM is trained using `gmm.fit(...)`.
Thank you very much for your explanation of initializing the GMM. I tried to train my data with the same initialization method, but errors were reported during the process. May I ask what causes this problem?

```
raise ValueError("The given covariance matrix is not symmetric")
ValueError: The given covariance matrix is not symmetric
```
Can you provide the example you are running, or the logs? The covariance matrix of a multivariate Gaussian distribution must always be symmetric, i.e. $\Sigma = \Sigma^T$. The error was raised because a non-symmetric matrix was being set as a covariance matrix.
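A common source of this error is tiny floating-point asymmetry in a numerically computed covariance. A sketch of how one might check for it and symmetrize (the helper `make_symmetric` below is hypothetical, not part of any library):

```python
import numpy as np

def make_symmetric(cov, tol=1e-8):
    """Return a symmetric version of `cov`; raise if the asymmetry
    is larger than numerical noise."""
    asym = np.abs(cov - cov.T).max()
    if asym > tol:
        raise ValueError("The given covariance matrix is not symmetric "
                         f"(max asymmetry {asym:.2e})")
    # Averaging with the transpose removes tiny floating-point asymmetry.
    return 0.5 * (cov + cov.T)

X = np.random.default_rng(0).normal(size=(50, 3))
cov = np.cov(X, rowvar=False)
cov_sym = make_symmetric(cov)
print(np.allclose(cov_sym, cov_sym.T))  # True
```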
Of course, what I'm running is this program: Desktop.zip
When I run `gmr.py` and `gmr_letter.py`, the initialization of the GMM is different, and I don't understand the `gaussians=(mean, cov)` initialization:

```python
gmm = GMM(gaussians=[Gaussian(mean=np.random.uniform(-1., 1., size=dim),
                              covariance=0.1*np.identity(dim))
                     for _ in range(num_components)])
```

```python
gmm = GMM(gaussians=[Gaussian(mean=np.concatenate((np.random.uniform(0., 2., size=1),
                                                   np.random.uniform(-8., 8., size=dim-1))),
                              covariance=0.1*np.identity(dim))
                     for _ in range(num_components)])
```
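My reading of the two snippets (an assumption, not confirmed in this thread) is that the first draws every dimension of each mean uniformly in [-1, 1], while the second treats the first dimension as a time/phase input drawn in [0, 2] and the remaining output dimensions in [-8, 8], roughly matching the range of the letter data. A small sketch of the two sampling schemes:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_components = 3, 5

# gmr.py style: every dimension of each mean uniformly in [-1, 1].
means_a = rng.uniform(-1., 1., size=(num_components, dim))

# gmr_letter.py style: first dimension (assumed time/phase input) in
# [0, 2], remaining output dimensions in [-8, 8].
means_b = np.hstack([rng.uniform(0., 2., size=(num_components, 1)),
                     rng.uniform(-8., 8., size=(num_components, dim - 1))])

print(means_a.min() >= -1. and means_a.max() <= 1.)
print((means_b[:, 0] >= 0.).all() and (np.abs(means_b[:, 1:]) <= 8.).all())
```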
When I tried to train the GMM on the end-effector position data, the output was completely wrong. If possible, I hope you can write an example, thank you!
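For reference, a minimal self-contained sketch of fitting a GMM to synthetic end-effector position data with scikit-learn (not the library from this thread), assuming rows of the form `[t, x, y, z]`; the trajectories here are made up for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical end-effector data: 5 noisy demonstrations of a 3D motion,
# each stacked as rows [t, x, y, z].
rng = np.random.default_rng(1)
T = 100
t = np.linspace(0., 1., T)
demos = []
for _ in range(5):
    x = t + 0.01 * rng.normal(size=T)
    y = np.sin(np.pi * t) + 0.01 * rng.normal(size=T)
    z = 0.5 * t**2 + 0.01 * rng.normal(size=T)
    demos.append(np.column_stack([t, x, y, z]))
X = np.vstack(demos)  # shape (500, 4)

# k-means initialization (scikit-learn's default 'kmeans') before EM,
# mirroring the init-then-fit workflow described earlier in the thread.
gmm = GaussianMixture(n_components=6, init_params='kmeans',
                      random_state=0).fit(X)
print(gmm.means_.shape)
```

Scaling time and positions to comparable ranges before fitting tends to help both the k-means step and EM.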