Closed sadransh closed 3 years ago
Hi Sadra, thanks for the question. sequitur supports what you're asking for, and you'll probably want to use the LSTM autoencoder (LSTM_AE). If you want to use quick_train, you'll first have to create a training set. Do this by creating a list of tensors, each with shape [sequence_length, num_signals]. Then plug your training set and the model into quick_train, like so:
from sequitur.models import LSTM_AE
from sequitur import quick_train
train_set = ... # List of tensors, each with shape [sequence_length, num_signals]
encoding_dim = ... # Whatever you want the vector encoding size to be
encoder, decoder, _, _ = quick_train(LSTM_AE, train_set, encoding_dim=encoding_dim)
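For the multi-sensor case in the original question (4 sensors, 400 samples per instance), each training example becomes one tensor of shape [400, 4]. Here is a minimal sketch of that reshaping using plain Python lists as stand-ins for tensors (the raw layout and the dummy zero data are assumptions for illustration; in practice you would wrap each sequence with torch.tensor before passing the list to quick_train):

```python
# Assumed raw layout: raw[i][s] is the 400-sample series from sensor s for
# object i. Dummy zeros stand in for real sensor readings.
num_objects, seq_len, num_signals = 3, 400, 4
raw = [[[0.0] * seq_len for _ in range(num_signals)] for _ in range(num_objects)]

# Transpose each object from [num_signals, seq_len] to [seq_len, num_signals],
# so every training example has shape [sequence_length, num_signals] = [400, 4].
train_set = [
    [[sensors[s][t] for s in range(num_signals)] for t in range(seq_len)]
    for sensors in raw
]

print(len(train_set), len(train_set[0]), len(train_set[0][0]))  # 3 400 4
```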
Please let me know if you have any further questions!
Oh and check out https://projector.tensorflow.org/ if you want to visualize the latent space produced by the autoencoder.
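The projector accepts plain tab-separated files, so one way to get your encodings into it is to dump them as TSV. A minimal sketch, where vectors is a hypothetical placeholder for the list of fixed-size encodings produced by the encoder:

```python
import csv

# Hypothetical encodings: one fixed-size vector per training example.
vectors = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]

# Write one vector per row, values separated by tabs, which is the format
# projector.tensorflow.org expects when you "Load" your own data.
with open("vectors.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(vectors)
```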
@sadransh Please let me know if this solves your issue so I can close it. Thanks!
Thanks a lot. To be honest, in the meantime I switched to Keras. However, the web-based projector you mentioned was really helpful to me (the embedded projector behaved buggily for my case).
Reviewing your README, I think it is clear that your library can do this; it just wasn't clear to me because I was a beginner at the time I first saw your work.
Thanks a lot for your help.
I might be able to create a tutorial notebook based on the HAR dataset using your library.
Would you be willing to accept such a pull request?
Glad I could help. And yes I'd be happy to accept that PR.
Thanks for sharing this nice library!
I wonder if it is possible to use your work for encoding multiple signals.
I have 4 sensors, each of which produces 400 samples per instance (time series with an equal number of samples), so each of my objects has 4*400 samples (and one label). As an example, you can consider a dataset like the one used in this work: https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition
I would like to feed an autoencoder similar to the one used in the above-mentioned repo, so the input tensor looks like this:
(7500, 128, 9) => (nb_of_objects, length of each sequence, nb_of_input_signals)
(For example, object 0 of the 7500 objects in the train set is one 128x9 slice of this tensor.)
Could you please tell me whether it is possible to encode such signals using your work? If so, please give me a hint on how to do it.
In addition, I don't see any latent-space visualization. Do you suggest a specific library for such a task?