I'm sorry to disturb you again. I previously opened an issue in 'DiseaseProgressionModeling-HMM' but haven't received a reply, so I wonder whether it was overlooked. Here is the original question:
Dear kseverso,
Hello, I am trying to reproduce the PD progression model but have run into some difficulties, and I hope you can help.
I followed the preprocessing steps from "Discovery-of-PD-States-using-ML" to process the PPMI dataset (with some changes, since the dataset itself has changed) and then fit the PIOHMM. However, even when I set K = 8 hoping to recover 8 states, I only obtain 3-4 states on the training set, and the sequences often reach their final state very early, e.g. settling into states 3 and 4 by t < 5 (with T = 31). This effect becomes more pronounced with more iterations, and training can even end with only two states.
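To make the collapse concrete, this is roughly how I'm counting the states that actually get used. It's a minimal sketch with a made-up decoded path, not code from your repository; `occupied_states` is my own helper name:

```python
import numpy as np

def occupied_states(state_seq, k):
    """Return the indices of the k states that actually appear in a
    decoded state sequence (e.g. a Viterbi path)."""
    counts = np.bincount(np.asarray(state_seq), minlength=k)
    return np.flatnonzero(counts)

# Hypothetical decoded path for one subject: K = 8 requested,
# but only states 0, 3 and 4 are ever visited.
path = [0, 0, 3, 3, 3, 4, 4, 4, 4]
print(occupied_states(path, 8))  # -> [0 3 4]
```

Applied across all training subjects, this is how I arrive at the 3-4 occupied states mentioned above.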
Besides, I'm not sure I fully understand the parameter-learning steps. Are the ELBO and log_prob (self.ll) computed at each iteration analogous to the 'loss' in neural-network training? In my runs, the ELBO is around 30,000 after the first iteration, drops to around -10,000 after the second, and then stays negative in the range of roughly -100,000 to -120,000. log_prob, meanwhile, stays around 110,000; with a learning rate of 1e-18 it fluctuates around 120,000 by about 20 iterations (it never converges, even with the use_CC convergence criterion). How did the ELBO and log_prob behave when you applied the model to the PPMI dataset, and what orders of magnitude were they?
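For reference, this is the kind of convergence check I have in mind when I say the run never converges. It's a sketch under my own assumptions (the tolerance and the `has_converged` helper are mine, not from PIOHMM); my understanding is that under EM the ELBO should increase monotonically, so a drop like the one I observe between iterations 1 and 2 seems suspicious:

```python
def has_converged(elbo_history, rel_tol=1e-4):
    """Treat the ELBO like a negated loss: under EM it should not
    decrease, and we declare convergence when the relative
    improvement falls below rel_tol."""
    if len(elbo_history) < 2:
        return False
    prev, curr = elbo_history[-2], elbo_history[-1]
    if curr < prev:
        # Each EM step should not decrease the ELBO; a drop usually
        # signals a numerical issue or a bug in the updates.
        print("Warning: ELBO decreased this iteration")
    return abs(curr - prev) <= rel_tol * abs(prev)

# An ELBO trace that is plateauing:
trace = [-120000.0, -110500.0, -110050.0, -110045.0]
print(has_converged(trace))  # -> True
```

With my actual traces (ELBO jumping from +30,000 to large negative values, log_prob oscillating around 110,000-120,000), no such criterion is ever satisfied.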
Looking forward to your reply!