guillaume-chevalier / LSTM-Human-Activity-Recognition

Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier

Large number of time steps (3072) #13

Closed ron-weiner closed 6 years ago

ron-weiner commented 6 years ago

Hey, is there a way to handle a large number of time steps (3072) instead of 128 with only minor changes?

Thanks

guillaume-chevalier commented 6 years ago

There are 128 time steps, not 128 features. If you want to change that number, it is set on this line of code: n_steps = len(X_train[0]). Changing the dataset should change the neural net's architecture automatically. However, you may need to make the neural net bigger and adjust other hyperparameters too.
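For illustration, here is a minimal sketch (not the repo's actual data loader; the array shape below is an assumption) of how the time-step and feature counts follow from the shape of X_train:

```python
import numpy as np

# Hypothetical EEG-shaped data: 100 examples, 3072 time steps, 9 channels.
X_train = np.random.randn(100, 3072, 9)

n_steps = len(X_train[0])     # 3072 time steps per example (the repo's n_steps = len(X_train[0]))
n_input = len(X_train[0][0])  # 9 input features (channels) per time step
print(n_steps, n_input)       # -> 3072 9
```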

ron-weiner commented 6 years ago

Thanks for your reply! I meant 3072 time steps. I understood where it is calculated, and it didn't raise any error, but I don't think it can run that way in our lifetime. I couldn't even finish building the graph.

I want to use your code on some EEG data. In my case I have 9 channels, 3 output classes, and 3072 time steps, but I don't think it is necessary to connect ALL of the cells across the full sequence like you have done in your work. I tried to figure out how to focus on a shorter span of time steps, like 300 (a sliding time window), that progresses until it reaches the total number of time steps.

Any idea how this could be implemented? Or do you have a better approach?
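Something like this windowing sketch is what I have in mind (NumPy only; the window length of 300 and the stride are placeholder values):

```python
import numpy as np

# One long hypothetical EEG recording: (time steps, channels).
recording = np.random.randn(3072, 9)
window, stride = 300, 150  # placeholder window length and hop

# Cut the recording into shorter, overlapping windows.
windows = np.stack([
    recording[start:start + window]
    for start in range(0, recording.shape[0] - window + 1, stride)
])
print(windows.shape)  # (19, 300, 9): each window could be fed as one 300-step example
```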

Thank you

guillaume-chevalier commented 6 years ago

LSTM is bad for long time series. It is still about short-term memory, even though that memory is longer than a plain RNN's. You may want to check out PLSTMs, or use convs + pooling to shorten your signal before feeding it to an RNN.
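For the convs + pooling idea, here is a minimal Keras-style sketch (not this repo's original graph-mode TensorFlow code; the layer sizes are arbitrary, and the 3072-step, 9-channel, 3-class shapes come from the numbers above):

```python
import tensorflow as tf

# 1D convolutions + pooling shrink the 3072-step signal before the LSTM sees it.
inputs = tf.keras.Input(shape=(3072, 9))                                        # 3072 time steps, 9 channels
x = tf.keras.layers.Conv1D(32, kernel_size=7, strides=2, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling1D(pool_size=2)(x)                                # ~766 steps remain
x = tf.keras.layers.Conv1D(64, kernel_size=5, strides=2, activation="relu")(x)
x = tf.keras.layers.MaxPooling1D(pool_size=2)(x)                                # ~190 steps remain
x = tf.keras.layers.LSTM(64)(x)                                                 # LSTM now runs over a much shorter sequence
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)                     # 3 output classes

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```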

ron-weiner commented 6 years ago

Thank you very much, I will check those out!

You have done a great job, applause :)