LINCellularNeuroscience / VAME

Variational Animal Motion Embedding - A tool for time series embedding and clustering
GNU General Public License v3.0

Code for MotionMapper and MoSeq #91

Closed shreyask3107 closed 1 year ago

shreyask3107 commented 1 year ago

Hi Team,

Your paper compares VAME with MotionMapper and MoSeq. Could you share the code you applied to your dataset and the plots generated for the comparison?

Thanks!

kvnlxm commented 1 year ago

Hi,

For MotionMapper we used the implementation from their corresponding GitHub repository. The MoSeq code is available here; you first need to sign the EULA. There is also a good implementation by the Linderman Lab if you are purely interested in the AR-HMM model. We had access to a previous version of MoSeq for which we signed an MTA, and we therefore can't share that code.

shreyask3107 commented 1 year ago

Thanks for sharing this.

Would it be possible to share the MotionMapper code you trained on pose files? It would save me a good chunk of time.

I am also a bit confused about how the AR-HMM was used in your paper across different video files, i.e., how you maintained consistent labels across multiple videos.

Thanks!

kvnlxm commented 1 year ago

Maybe this repository will help you save time and run MotionMapper a bit more easily. Our code was essentially the implementation given in the original GitHub repository. We state the parameters used in our preprint.

Regarding the AR-HMM, you have to concatenate the pose trajectory files (as with the segmentation in VAME) and then run the AR-HMM inference. This is the same way it is done in the original MoSeq.
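For illustration, here is a minimal sketch of that workflow using the Linderman Lab's ssm package mentioned above. This is not the authors' exact pipeline: the file names, the number of states, and the iteration count are assumptions, and you should check the ssm docs for your installed version.

```python
import numpy as np
import ssm  # Linderman Lab's state-space models package

# Hypothetical egocentrically aligned pose files, one per video (assumption)
pose_files = ["video1_poses.npy", "video2_poses.npy", "video3_poses.npy"]
datas = [np.load(f) for f in pose_files]  # each array: (time, n_pose_dims)

n_states = 30                 # assumed number of motifs to infer
obs_dim = datas[0].shape[1]   # dimensionality of the pose representation

# AR-HMM: an HMM with autoregressive observations
arhmm = ssm.HMM(n_states, obs_dim, observations="ar")

# Fit on all videos jointly so the same state labels apply everywhere
arhmm.fit(datas, method="em", num_iters=50)

# Per-frame motif labels, consistent across videos
labels = [arhmm.most_likely_states(d) for d in datas]
```

Because a single model is fit to the concatenated set of trajectories, the inferred state indices mean the same thing in every video, which is what keeps the labels comparable across recordings.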

Hope this helps!

shreyask3107 commented 1 year ago

Thank you. This is very helpful.

I still have a couple of doubts:

  1. If the pose trajectory files are concatenated, wouldn't that hinder the performance of VAME? The VAME prediction for a time step depends on the poses of previous time steps, so at the boundaries the model would rely on the final time steps of a different video file. Could you please help me understand this?

  2. Did you simply apply PCA to the poses before segmenting with the AR-HMM? I am trying to figure out how to use the poses with the AR-HMM (MoSeq).

Thanks for your speedy replies.

kvnlxm commented 1 year ago

  1. In simple terms, the VAME model works as follows: the RNN receives random trajectory samples (in our paper we used a time window of 30 frames) and learns an embedding, or latent space, from them. Once the model is trained, we infer a latent vector representation for every frame. Once we have the latent vectors for all videos, we simply concatenate them and run a Hidden Markov Model to infer the motifs (see the sketch after this list). In this last step, VAME is similar to the AR-HMM. The power of VAME lies in the fact that the RNN captures the dynamics of the input trajectory very well in its latent space, so the HMM has an easy time finding motif states.

  2. For the AR-HMM, we fed the original but egocentrically aligned poses to the model. You could of course also run PCA first and see if this improves the capabilities of the AR-HMM. Otherwise, you could first infer the latent space of the trajectories with VAME (as a non-linear model) and then run the AR-HMM on top of it. This is for sure an interesting approach combining two very powerful methods.
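As a concrete illustration of point 1, here is a minimal sketch of running an HMM on latent vectors concatenated across videos. It uses hmmlearn's GaussianHMM as a stand-in for the HMM step, and the file names and the number of motifs are assumptions, not values from the paper.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Hypothetical per-video latent vectors produced by the trained VAME encoder
latent_files = ["video1_latents.npy", "video2_latents.npy"]
latents = [np.load(f) for f in latent_files]  # each array: (time, latent_dim)

# Concatenate all videos; 'lengths' tells the HMM where each sequence ends,
# so transitions are not learned across video boundaries
X = np.concatenate(latents, axis=0)
lengths = [z.shape[0] for z in latents]

n_motifs = 15  # assumed number of motifs
hmm = GaussianHMM(n_components=n_motifs, covariance_type="full", n_iter=100)
hmm.fit(X, lengths)

# One shared model means motif labels are consistent across all videos
motif_labels = hmm.predict(X, lengths)
per_video_labels = np.split(motif_labels, np.cumsum(lengths)[:-1])
```

Passing the per-video lengths also addresses the concatenation concern from your first question: the HMM treats each video as a separate sequence, so no transitions are inferred across the stitching points.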

You are welcome!

kvnlxm commented 1 year ago

Will close this for now.