Open 003084-K opened 1 year ago
Thanks for trying the code! We applied time warping to LFP data in the time domain in our paper, but I haven't tried it on spectrograms. I think it should work in principle, but it is bizarre to me that your loss values are so low to begin with. How are the data normalized? The loss is simply the mean-squared error between the trial average and the single-trial activity, so my only thought is that the data are on a very small scale.
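To illustrate that point, here is a minimal NumPy sketch (array names and shapes are hypothetical) showing that this loss scales with the square of the data scale, so rescaled or normalized inputs can produce very small loss values without anything being wrong:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-trial data: (trials, timebins, channels).
data = rng.normal(size=(50, 100, 8))

# Before any warping, the loss is the mean-squared error between each
# trial and the trial-averaged template.
template = data.mean(axis=0)
loss = np.mean((data - template) ** 2)

# Rescaling the data by 0.01 shrinks the loss by a factor of 1e-4:
# tiny loss values can simply reflect a small data scale.
small = 0.01 * data
loss_small = np.mean((small - small.mean(axis=0)) ** 2)
print(loss, loss_small)
```

So a first diagnostic is just checking `np.var(data)` on whatever array you pass to the warping code.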
Are the heatmaps you are showing above single-trial activity? It looks by eye like there is a ton of trial-to-trial variability (i.e. very different patterns appearing on each trial) so perhaps the time warping code is struggling to find a single template that describes everything.
I see that you've looked at MultiShiftWarping, and this might be the way you need to go. The preprint associated with this is here: https://www.biorxiv.org/content/10.1101/2020.03.02.974014v3
While that preprint sketches out some ideas, I haven't pursued them very deeply in practice. Good luck! :slightly_smiling_face:
Hi Dr. Williams,
First of all thank you so much for sharing all of these resources openly online. I've already learned so much just by going through your implementations, and your code is so nicely documented.
I have a motor sequence learning dataset in which I record LFP and ECoG data from patients with movement disorders. They learn two different typed sequences (S1 and S2). Each time a fixation cross appears, they type one of the two sequences. I'm hoping to use frequency-domain neural activity during the reaction time period to predict which sequence the patient is about to type, using a simple classifier. The total reaction time is highly variable, so I am thinking about trimming the data to the 200ms right before movement onset across all trials, and then applying time warping within that window. I'd follow that with TCA or some other dimensionality reduction, use the resulting factors as part of my feature vector, and then run feature selection and a simple classifier.
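For concreteness, the trimming step I have in mind looks something like the sketch below (all names, shapes, and the sampling rate are assumptions, not my actual data):

```python
import numpy as np

# Assumed setup: continuous per-trial spectral data plus a movement-onset
# sample index for each trial.
fs = 1000                    # sampling rate in Hz (assumed)
window = int(0.2 * fs)       # 200 ms pre-onset window

rng = np.random.default_rng(1)
n_trials, n_times, n_freqs = 30, 3000, 16
spectra = rng.normal(size=(n_trials, n_times, n_freqs))
onsets = rng.integers(window, n_times, size=n_trials)  # onset sample per trial

# Trim each trial to the 200 ms immediately preceding its movement onset,
# giving a (trials x timebins x frequencies) array of the shape the
# warping and TCA code expects.
trimmed = np.stack(
    [spectra[i, t - window:t, :] for i, t in enumerate(onsets)]
)
print(trimmed.shape)  # (30, 200, 16)
```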
When I apply your TCA code to the (trimmed) raw spectral data just to make sure I am doing things correctly, the results seem to make sense when I compare them to some of the basic trends in the data (though it doesn't visibly distinguish between S1 and S2 in the across-trial factors, unfortunately). S1 is purple dots, S2 is yellow dots.
When I try to apply the piecewise warping example code, the loss is extremely small, and I'm not sure if it makes any sense.
Similarly, when I try to do a hyperparameter search, the loss is extremely small. However, in the hyperparameter search there also seems to be no change in loss across iterations, and the results from every random sample draw per fold seem to be identical (I plotted all loss histories for all hyperparameter samples for all models below; the lines for the same models are just overlapping).
Do you have any idea what I am doing wrong? Is it inappropriate to use these functions on spectral neural data? Any suggestions for alternative methods or changes to my overall approach?
The code snippet I've been using for piecewise warping is below: