magland / figurl-franklab-views


Modify 1D and 2D decode visualizations to use a predefined set of position bins rather than inferring from data #19

Closed · edeno closed this issue 1 year ago

edeno commented 1 year ago

It would be nice to have the valid on-track positions come from the Environment class in my code (or, alternatively, be pre-specified by an array) rather than be inferred from the positions given.

The reason for this is that the dataset for the decoding model will sometimes be smaller than the dataset for the encoding model, so not all of the valid positions will appear in the data.
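
Concretely, I'm imagining something like the sketch below (choose_position_bins is a hypothetical helper, not an existing figurl function): a pre-specified bin array takes precedence, and the current data-driven inference is only the fallback.

```python
import numpy as np

def choose_position_bins(observed_positions, predefined_bin_centers=None,
                         bin_size=2.0):
    """Return the (n_bins, 2) array of bin centers to use for the view.

    predefined_bin_centers -- e.g. the valid on-track bin centers taken
    from a fitted Environment; if supplied, it takes precedence over
    data-driven inference.
    """
    if predefined_bin_centers is not None:
        return np.asarray(predefined_bin_centers)
    # Current, data-driven fallback: a grid spanning only the positions
    # actually present in this (possibly truncated) dataset.
    positions = np.asarray(observed_positions)
    mins, maxs = positions.min(axis=0), positions.max(axis=0)
    x = np.arange(mins[0], maxs[0] + bin_size, bin_size)
    y = np.arange(mins[1], maxs[1] + bin_size, bin_size)
    xx, yy = np.meshgrid(x, y)
    return np.column_stack([xx.ravel(), yy.ravel()])
```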

jsoules commented 1 year ago

Hi Eric, do you have an example dataset where this is the case? Just looking for an instance to work against. Thanks!

edeno commented 1 year ago

I think you could take any dataset you have and keep just the first 100 time bins. In that case, the full path the animal traversed will not be displayed, because the track extent will be inferred from those first 100 time bins only.
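
For example, assuming the position data is a NumPy array on disk (the file name is just illustrative):

```python
import numpy as np

position = np.load("position.npy")  # hypothetical (n_time, 2) position array
truncated = position[:100]          # keep only the first 100 time bins

# A grid inferred from `truncated` covers only the small region visited in
# those 100 bins, so the rest of the track never gets drawn.
print("full extent:     ", position.min(axis=0), position.max(axis=0))
print("truncated extent:", truncated.min(axis=0), truncated.max(axis=0))
```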

jsoules commented 1 year ago

Here's the current state of 2D track animation processing. (I haven't yet dug into the situation for 1D.)

Front-end: The relevant components live in this repository, as test-gui/src/package/view-track-position-animation/*.

Back-end: It appears that the processing code is no longer maintained on our side: I can't find, in FI-side repositories, the preprocessing scripts that would turn position/decoding source data into a TrackAnimationStaticData JSON object. Instead, it looks like this now lives within spyglass, specifically LorenFrankLab/spyglass/src/spyglass/decoding/visualization.py and .../decoding/visualization_2D_view.py.
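
For orientation, the preprocessing step conceptually amounts to something like the following (the field names here are placeholders, not the actual TrackAnimationStaticData schema); the relevant point is that the bin grid gets baked into the JSON at this stage, so a predefined grid would have to be threaded through here:

```python
import json
import numpy as np

def build_static_data(bin_centers, head_positions, timestamps):
    # Placeholder field names, not the real schema: the bin grid is fixed
    # at preprocessing time, which is why a pre-specified grid needs to be
    # accepted here rather than inferred downstream.
    payload = {
        "type": "TrackAnimation",  # hypothetical discriminator
        "binCenters": np.asarray(bin_centers).tolist(),
        "positions": np.asarray(head_positions).tolist(),
        "timestamps": np.asarray(timestamps).tolist(),
    }
    return json.dumps(payload)
```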

At present, it appears that the minimally invasive change would be to modify that processing code in two areas (elaborated below):

1. Take the set of valid on-track positions from a pre-specified source (e.g. the Environment class) rather than inferring it from the observed positions.
2. Reconcile the given decode bins with the observed-position bins when building the animation data.

For implementing the first part, I'll need additional input from @edeno on how the Environment class represents track geometries, as well as an example populated Environment for the Chimi 2020 data. (I do have one pickled, but it doesn't have track geometry populated, so if we can run through the appropriate steps to populate it, that should be sufficient.)
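
For reference, my current understanding of that workflow is roughly the following (attribute names are from replay_trajectory_classification; the fit arguments and raveling order are assumptions I'd want @edeno to confirm):

```python
import numpy as np
from replay_trajectory_classification import Environment

position = np.load("chimi2020_position.npy")  # hypothetical (n_time, 2) array

env = Environment(place_bin_size=2.0)
env.fit_place_grid(position, infer_track_interior=True)

# Bin centers for the whole fitted grid, plus a mask of which bins lie on
# the track interior (the raveling order is my assumption to confirm):
on_track = env.is_track_interior_.ravel(order="F")
valid_bin_centers = env.place_bin_centers_[on_track]
```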

For implementing the second part, I'll need to confirm some assumptions and then see how good the match is between my example decoded data and the observed data; we may need to have a conversation about what to do in the event that the scales don't match or the given decode bins don't align readily with the observed-position bins.
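
As a first pass, the alignment check I have in mind is something like this (the helper name and tolerance are mine):

```python
import numpy as np

def grids_align(decode_bin_centers, track_bin_centers, tol=1e-6):
    """True if every decoded bin center coincides with some track bin center."""
    decode = np.asarray(decode_bin_centers, dtype=float)
    track = np.asarray(track_bin_centers, dtype=float)
    # Distance from each decode bin to its nearest track bin.
    d = np.linalg.norm(decode[:, None, :] - track[None, :, :], axis=-1)
    return bool((d.min(axis=1) < tol).all())
```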

The above is the minimally invasive solution; as best I can tell, the front-end code does represent (potentially separate) sets of position buckets for the decode and the actual track position. If we are certain that these can never differ in size or location, we could refactor to remove that possibility; it would simplify the ultimate implementation, but it would remove flexibility for the future and would mean a larger set of changes (which would also break currently-existing figurls).

jsoules commented 1 year ago

The 2D case is addressed by Spyglass PR #642 (https://github.com/LorenFrankLab/spyglass/pull/642).

jsoules commented 1 year ago

Talked with Eric: there doesn't actually seem to be a parallel issue for the 1D case, so I think this issue is resolved by Spyglass PR #642 (https://github.com/LorenFrankLab/spyglass/pull/642).