Closed · HilbertHuangHitomi closed this 3 years ago
You did not misunderstand - in this case, no representations are lower-dimensional than the input. The need to compress to a lower-dimensional structure may not be relevant here because 1. the transformer isn't that similar to LVMs in the traditional sense, and 2. the need to compress to lower dimensions in deep LVMs isn't obvious and remains an open question (at least for me).
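For concreteness, a minimal sketch of the point above (the sizes and layer choices are illustrative assumptions, not the repository's actual configuration): in a standard transformer encoder, the hidden width (here named `num_input` to match the identifier discussed below) stays constant through every layer and is typically set at or above the number of observed channels, so no bottleneck narrower than the input ever appears.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; not the repo's actual configuration.
num_neurons = 98    # observed spiking channels
num_input   = 128   # transformer hidden width (>= num_neurons)

# The read-in expands the observation to the hidden width ...
read_in = nn.Linear(num_neurons, num_input)
# ... and every encoder layer maps num_input -> num_input,
# so no representation is ever narrower than the input.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=num_input, nhead=4),
    num_layers=2,
)
read_out = nn.Linear(num_input, num_neurons)  # back to per-neuron rates

x = torch.randn(50, 1, num_neurons)  # (time, batch, channels)
rates = read_out(encoder(read_in(x)))
print(rates.shape)                   # torch.Size([50, 1, 98])
```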
Thank you for the response!
Thanks for sharing the code!
I would like to reuse the code in our experiments, but I got a bit confused when double-checking it. As I understand it, we use latent variable models to infer lower-dimensional trajectories from the observed neural data. I had trouble determining the dimension of the latent variables for my own experiments, so I referred to your code for help. At `./src/model` line 151, it seems that across all of the experiments in the code, `num_input` is always larger than `num_neurons`. This confuses me: in the worst case, the encoder or decoder only needs to be an identity function, which is a trivial solution. I think the most important contribution of your work is introducing Transformer architectures into neural data analysis to significantly accelerate inference, and the work may also focus on the accuracy of reconstructing firing patterns. But might these settings deviate from the original LVM objective of finding a lower-dimensional structure? Or did I miss something and misunderstand it?
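To make the concern concrete, here is a hypothetical sketch (the names `num_neurons` and `num_input` follow the identifiers above; the construction is mine, not the repository's code) of how read-in/read-out layers with `num_input >= num_neurons` can realize an exact identity map, i.e. the trivial solution I am worried about:

```python
import torch
import torch.nn as nn

num_neurons, num_input = 98, 128  # num_input > num_neurons, as in the configs

read_in = nn.Linear(num_neurons, num_input, bias=False)
read_out = nn.Linear(num_input, num_neurons, bias=False)

# Hand-set the weights so read_out(read_in(x)) == x exactly:
# embed the neurons into the first num_neurons hidden coordinates ...
with torch.no_grad():
    read_in.weight.zero_()
    read_in.weight[:num_neurons, :] = torch.eye(num_neurons)
    # ... and project them straight back out.
    read_out.weight.zero_()
    read_out.weight[:, :num_neurons] = torch.eye(num_neurons)

x = torch.randn(16, num_neurons)
assert torch.allclose(read_out(read_in(x)), x)  # perfect "reconstruction"
```

Such a map achieves perfect reconstruction without learning any lower-dimensional structure, which is why the choice of `num_input` relative to `num_neurons` seems important for the LVM interpretation.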