Closed · yuehhua closed this issue 3 years ago
Hello,
Likelihoods are, in a way, unscaled probabilities, and here we have continuous densities over the features. So `p(x_i | gauss_j)` really is a probability density (because the `x_i` are continuous), and a density is not bounded by 1. If you have a small variance this will happen. I think with many Gaussians you always tend to get small variances (if the feature space is somewhat normalized), because you'll have a Gaussian covering a few data points that lie close together.
Hope this explains things a little.
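To make this concrete, here is a minimal numeric sketch (in Python rather than Julia, purely for illustration; `log_gauss` is my own helper, not part of this package). It shows that the log of a Gaussian density turns positive as soon as the variance is small enough that the density at the mean exceeds 1:

```python
import math

# Univariate Gaussian log-density: log N(x; mu, sigma^2)
def log_gauss(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# With a small standard deviation, the density at the mean exceeds 1,
# so the log-density is positive -- not a bug, since this is a density,
# not a probability.
print(log_gauss(0.0, 0.0, 0.1))  # ≈ 1.38, positive

# With a larger standard deviation, the log-density is negative as expected.
print(log_gauss(0.0, 0.0, 2.0))  # ≈ -1.61, negative
```

The same reasoning carries over per mixture component: each `log p(x_i | gauss_j)` is the log of a density, so its sign depends on the component's variance, not on any probability bound.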
Thank you for your explanation. I will check this on my data.
Thank you for making this great package. I used it in my project, and I want to compute likelihoods for my own purposes.
I followed your documentation and computed the likelihood with `llpg(gmm::GMM, x::Matrix)`, but I got all positive values, and I am confused by this outcome. As the README mentions, it returns `ll_ij = log p(x_i | gauss_j)`, the log likelihood per Gaussian `j` given data point `x_i`. In theory, log likelihoods should all be negative values, while negative log likelihoods are all positive, yet here they are positive. Could you explain these values?