ohjay / hmm_activity_recognition

Human activity recognition with HMMs

gmmhmm cannot work #1

Open LINYANWANG opened 5 years ago

LINYANWANG commented 5 years ago

If the 'm_type' option is set to 'gmm', training does not converge, whereas with 'GaussianHMM' it works and training converges.

ohjay commented 5 years ago

Hmm, I'm not sure how to reproduce this. Are you running one of the provided config files? With Python 2.7, hmmlearn 0.2.0, and the owen.yaml config file, my models converge.

ohjay commented 5 years ago

Edit: the question above appears to have been deleted. For anyone wondering, the commenter was asking whether they could use Python 3.6 instead of Python 2.7.

I've only tried Python 2.7, so I guess that's the only version I can endorse. Maybe Python 3.6 forces you to use different versions of the modules and they don't behave the same in all regards.

ohjay commented 5 years ago

(If time permits, I'll test the code with Python 3.6 later this week and get back to you.)

fbiying87 commented 5 years ago

> (If time permits, I'll test the code with Python 3.6 later this week and get back to you.)

Thanks for your reply. I tested it on Python 3.6. Feature extraction and model building seem to work. But my current problem is that some of the models, e.g. walking, always return NaN as the score, so after sorting it always returns 0 for this activity. Would it be possible to share your model with me? I want to reproduce the accuracy you got. It should be around 50% for each activity, right?

Thanks!
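As an aside for readers hitting the same symptom: one common workaround is to discard non-finite scores before ranking activities. A minimal sketch, assuming a hypothetical `scores` dict of per-activity log-likelihoods (this is not the repo's actual data structure):

```python
import math

def best_activity(scores):
    """Pick the activity with the highest finite log-likelihood.

    `scores` maps activity name -> log-likelihood; NaN entries are
    dropped instead of silently winning or losing the sort.
    """
    finite = {a: s for a, s in scores.items() if not math.isnan(s)}
    if not finite:
        return None  # every model produced NaN; nothing to rank
    return max(finite, key=finite.get)

print(best_activity({"walking": float("nan"), "running": -120.5}))  # -> running
```

If every model returns NaN, that usually points at the training data or dependency versions rather than the ranking step.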

ohjay commented 5 years ago

Sure, here's a ZIP file with some pre-trained models. If you extract the contents of the ZIP file in the project's root directory, then you can evaluate the models' performance with

python main.py classify config/quickstart.yaml

Using these models, I observe classification accuracies ranging from 58% to 80% on my validation split (the one generated by ./get_data.sh).
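For anyone checking their own numbers against these: accuracy here is just the fraction of correctly labeled clips per activity. A small sketch of that computation (the label names below are made up for illustration and are not the project's output format):

```python
from collections import defaultdict

def per_activity_accuracy(pairs):
    """pairs: iterable of (predicted_label, true_label) tuples.

    Returns a dict mapping each true activity to the fraction of its
    clips that were classified correctly.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, true in pairs:
        total[true] += 1
        if pred == true:
            correct[true] += 1
    return {a: correct[a] / total[a] for a in total}

pairs = [("walking", "walking"), ("running", "walking"),
         ("running", "running"), ("boxing", "boxing")]
print(per_activity_accuracy(pairs))  # -> {'walking': 0.5, 'running': 1.0, 'boxing': 1.0}
```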

ohjay commented 5 years ago

Hey @fbiying87, I managed to run a couple of tests. Python 3.6 seems to work as long as you use the exact versions of the modules that are specified in requirements.txt. It's when you switch those up that things get a little iffy. After I upgraded hmmlearn and scikit-learn to their latest versions, I started seeing NaN warnings, so there may be numerical issues somewhere.

For the time being, I recommend you just set up a virtual environment with the supported dependency versions. quickstart.sh might help you get started with that.
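One generic way to confirm an environment actually matches the pins before training is to compare installed versions against the `==` entries in requirements.txt. This helper is not part of the repo, just a sketch (it uses `importlib.metadata`, available on Python 3.8+):

```python
from importlib import metadata

def check_pins(lines):
    """Return (package, pinned, installed) for every pin that mismatches.

    `lines` is an iterable of requirements.txt-style lines; only exact
    `name==version` pins are checked, comments and blanks are skipped.
    `installed` is None when the package is not installed at all.
    """
    mismatches = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, pinned = line.split("==", 1)
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != pinned:
            mismatches.append((name, pinned, installed))
    return mismatches

# Example with a deliberately fake pin that cannot be satisfied:
print(check_pins(["definitely-not-installed==1.0"]))  # -> [('definitely-not-installed', '1.0', None)]
```

An empty result means the pinned versions are all in place; anything else is a candidate cause for the NaN behavior described above.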

fbiying87 commented 5 years ago

Hi @ohjay, thanks a lot for your reply. I figured it would be something like this. I used the latest hmmlearn and opencv versions. Some models were dropped due to NaN predictions, which is why the model counts differed between activities. I will try to use the exact versions from requirements.txt to reproduce the results. Thanks again.