hi @huangzj421 position is continuous, whereas direction is an 8-state classification task and active/passive is a 2-state one; that is why we use accuracy vs. R^2.
Can you otherwise expand on what you mean by "strange", and provide your example? You can see the figure here: https://cebra.ai/docs/cebra-figures/figures/Figure3.html#Figure-3h
hi @huangzj421 it seems you are using the wrong dataset.
@jinhl9 points out that in your comment you used the dataset 'area2-bump-posdir-active-passive', but in that dataset the discrete index is 0-15, accounting for 8 directions x 2 trial types (active, passive). You should use the dataset 'area2-bump-pos-active-passive', which uses the trial types (active, passive) as the discrete index.
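For example, something along these lines (a minimal sketch; the attribute names follow the demo notebooks, so please double-check against your installed CEBRA version):

```python
import cebra.datasets

# 'area2-bump-pos-active-passive': the discrete index encodes only the trial type,
# i.e. 2 classes (active vs. passive) -- this is what the Active/passive panel uses.
dataset = cebra.datasets.init('area2-bump-pos-active-passive')
neural = dataset.neural
trial_type = dataset.discrete_index

# 'area2-bump-posdir-active-passive' instead has a 0-15 discrete index
# (8 directions x 2 trial types), so accuracy on it is not comparable to the figure.
```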
Yes, thank you @jinhl9 and @MMathisLab. I have fixed it for the third subplot. But the accuracy in the second subplot is still higher than in the original paper, and I don't know why.
Can you share a minimal example? If it's only slightly better I'm not so concerned ;). I'll close this issue since the correct dataset solved it - but please do post back.
Thank you for your excellent work!
I came across a problem while reading the article. Considering that the first subplot in Figure 3 shows CEBRA-Behavior trained with (x, y) position, how is the position decoding done? Referring to other setups, the decoding pipeline I can think of is: the inputs are spikes and position, the output of CEBRA is the conditional embedding, and position decoding is then done on that conditional embedding. I'm not sure if that is the real implementation, and this part of the decoding code is not in https://cebra.ai/docs/demo_notebooks/Demo_primate_reaching.html. If the decoding really works like this, I don't think it is reasonable: the ideal position decoding should be based on an embedding of the spikes alone, or of the spikes conditioned on time t.
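For concreteness, here is a rough sketch of the pipeline I have in mind (this is only my guess, not necessarily the paper's implementation; the dataset name, decoder choice, split, and hyperparameters below are my own assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score
import cebra.datasets
from cebra import CEBRA

# Load spikes and (x, y) hand position (dataset name taken from the primate reaching demo).
data = cebra.datasets.init('area2-bump-pos-active')
neural = np.asarray(data.neural)
position = np.asarray(data.continuous_index)

# Simple split in time into train and test.
split = int(0.8 * len(neural))

# CEBRA-Behavior: embedding of spikes conditioned on position.
model = CEBRA(model_architecture='offset10-model', batch_size=512,
              output_dimension=3, max_iterations=5000)
model.fit(neural[:split], position[:split])

# Decode position from the embedding with a kNN regressor and score with R^2.
emb_train = model.transform(neural[:split])
emb_test = model.transform(neural[split:])
decoder = KNeighborsRegressor(n_neighbors=25)
decoder.fit(emb_train, position[:split])
print("position R^2:", r2_score(position[split:], decoder.predict(emb_test)))
```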
Could you please let me know the specific idea or implementation behind the position decoding? Thank you!
Hi, I am new to CEBRA. When I reproduced the training and test procedure for the Figure 3h decoding results in your Nature paper "Learnable Latent Embeddings for Joint Behavioral and Neural Analysis" based on the decoding demo, the Direction (active) and Active/passive subplots look strange. Specifically, I used 'area2-bump-target-active' and 'area2-bump-posdir-active-passive' as the training datasets, with their discrete_index as the labels in both cases. The final metric for the KNN decoder is sklearn.metrics.accuracy_score, since I noticed the y-axis is Acc. (%) rather than R^2. Am I doing something wrong?
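For reference, my setup for the Direction (active) panel looks roughly like this (a sketch only; the hyperparameters and the train/test split here are placeholders, and the dataset/attribute names are the ones I took from the demo notebooks):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
import cebra.datasets
from cebra import CEBRA

# 8 reach directions on active trials, used as a discrete label.
data = cebra.datasets.init('area2-bump-target-active')
neural = np.asarray(data.neural)
direction = np.asarray(data.discrete_index)

split = int(0.8 * len(neural))
model = CEBRA(model_architecture='offset10-model', batch_size=512,
              output_dimension=3, max_iterations=5000)
model.fit(neural[:split], direction[:split])

# kNN classification on the embedding, scored with accuracy (the y-axis is Acc. (%)).
emb_train = model.transform(neural[:split])
emb_test = model.transform(neural[split:])
knn = KNeighborsClassifier(n_neighbors=25)
knn.fit(emb_train, direction[:split])
print("direction accuracy:", accuracy_score(direction[split:], knn.predict(emb_test)))
```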