Thanks for your interest in our code. Inference in this Bayesian model is based on Markov chain Monte Carlo sampling. In general, we expect the chain to converge to the true posterior distribution during the burn-in period (corresponding to the burnin variable). During the collection period (corresponding to the num variable), I collect samples at each iteration to approximate the true posterior.
in case that isn't clear: the total number of iterations is necessarily greater than the burn-in period. after burn-in, we keep one sample out of every k (called 'space' in the code), in the hope that the retained samples are effectively independent. we keep n such samples, so the total number of iterations equals burn-in + k*n.
does that clarify?
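The schedule described above can be sketched as follows. This is a hypothetical illustration, not the actual DictionaryLearning.m code; the variable names `burnin`, `space`, and `num` follow the names mentioned in the thread.

```python
def mcmc_schedule(burnin, space, num):
    """Return the iteration indices whose samples are kept.

    Sketch of the sampling schedule: discard the first `burnin`
    iterations, then keep every `space`-th sample until `num`
    samples have been collected.
    """
    kept = []
    it = 0
    while len(kept) < num:
        it += 1
        # keep a sample only after burn-in, once per `space` iterations
        if it > burnin and (it - burnin) % space == 0:
            kept.append(it)
    return kept

kept = mcmc_schedule(burnin=100, space=5, num=10)
print(len(kept))   # 10 samples collected
print(kept[-1])    # 150 total iterations = burnin + space*num
```

The last kept iteration illustrates the formula in the comment above: total iterations = burn-in + k*n.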
That does make sense. Thanks for the clarification!
Why is the number of iterations executed in DictionaryLearning.m twice the number indicated by the burnin and num input variables?