
VestibularVR Analysis Pipeline

This is the general pipeline for loading, preprocessing, aligning, quality checking and applying basic analysis to data recorded on the RPM with HARP devices (e.g. running), eye movement data derived from SLEAP, and neural data (fiber photometry, Neuropixels).

Installation

The code mainly relies on the harp-python and aeon_mecha packages. The proposed setup is to first create an Anaconda environment for aeon_mecha, install it, and then install harp-python inside the same environment. Optional packages required by some of the example Jupyter notebooks, but not essential for the main pipeline, are opencv-python (cv2) and ffmpeg.

Create Anaconda environment and add it to Jupyter

conda create -n aeon
conda activate aeon
conda install -c anaconda ipykernel
python3 -m ipykernel install --user --name=aeon

Install aeon_mecha

git clone https://github.com/SainsburyWellcomeCentre/aeon_mecha.git
cd aeon_mecha
python -m pip install -e .

On macOS, use conda install pip instead of the last line
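To confirm the editable install is importable, a minimal check (assuming the top-level module of aeon_mecha is named aeon, as in the upstream repository) is:

import aeon                # top-level package installed by aeon_mecha
print(aeon.__file__)       # should point into the cloned aeon_mecha directory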

Install harp-python

pip install harp-python
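A minimal sketch of what harp-python provides, using the H1 device manifest shipped in this repository (the exact create_reader signature may differ between harp-python versions):

import harp

# Build a reader from the H1 device manifest under harp_resources/.
# Each register defined in the manifest becomes an attribute of the reader,
# and its .read() method parses the corresponding HARP .bin file.
h1_reader = harp.create_reader("harp_resources/h1-device.yml")
print([name for name in dir(h1_reader) if not name.startswith("_")])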

Install other packages

pip install lsq-ellipse
pip install h5py
pip install opencv-python
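An optional import check for these packages (note that the module names differ from the pip package names: opencv-python imports as cv2 and lsq-ellipse as ellipse):

import cv2                      # installed as opencv-python
import h5py
from ellipse import LsqEllipse  # installed as lsq-ellipse

print(cv2.__version__, h5py.__version__)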

Repository contents

📜demo_pipeline.ipynb   -->   main example of pipeline usage and synchronisation
📜grab_figure.ipynb
📂harp_resources
 ┣ 📄utils.py   -->   functions for data loading
 ┣ 📄process.py   -->   functions for converting, resampling, padding, aligning, plotting data
 ┣ 📄h1-device.yml   -->   H1 manifest file
 ┣ 📄h2-device.yml   -->   H2 manifest file
 ┗ 📂notebooks
    ┣ 📜load_example.ipynb
    ┣ 📜demo_synchronisation.ipynb
    ┣ 📜Treshold_exploration_Hilde.ipynb
    ┣ 📜comparing_clocked_nonclocked_data.ipynb
    ┗ 📜prepare_playback_file.ipynb
📂sleap
 ┣ 📄load_and_process.py   -->   main functions for SLEAP preprocessing pipeline
 ┣ 📄add_avi_visuals.py   -->   overlaying SLEAP points on top of the video and saving as a new one for visual inspection
 ┣ 📄horizontal_flip_script.py   -->   flipping avi videos horizontally using OpenCV
 ┣ 📄registration.py   -->   attempt at applying registration from CaImAn to get rid of motion artifacts (https://github.com/flatironinstitute/CaImAn/blob/main/demos/notebooks/demo_multisession_registration.ipynb)
 ┣ 📄upscaling.py   -->   attempt at applying LANCZOS upsampling to avi videos using OpenCV to minimise SLEAP jitter
 ┗ 📂notebooks
    ┣ 📜batch_analysis.ipynb
    ┣ 📜ellipse_analysis.ipynb   -->   visualising SLEAP preprocessing outputs
    ┣ 📜jitter.ipynb   -->   quantifying jitter inherent to SLEAP
    ┣ 📜light_reflection_motion_correction.ipynb   -->   segmentation of light reflection in the eye using OpenCV (unused)
    ┣ 📜saccades_analysis.ipynb   -->   step-by-step SLEAP data preprocessing (now inside load_and_process.py) + initial saccade detection
    ┗ 📜upsampling_jitter_analysis.ipynb   -->   loading SLEAP outputs from LANCZOS upsampling tests

Conventions

Saving SLEAP outputs:

When exporting SLEAP inference outputs (in the SLEAP window: File >> Export Analysis CSV >> Current Video), save the file in the same directory as the analysed video (which has to be located manually) under the following naming convention:

e.g. VideoData2_1904-01-14T04-00-00.sleap.csv
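As a rough illustration of this convention, the sketch below (a hypothetical helper, not part of the pipeline) derives the expected CSV path from the path of the analysed video:

from pathlib import Path

def expected_sleap_csv(video_path):
    # Keep the video's directory and base name and swap the extension for
    # .sleap.csv, e.g. VideoData2_1904-01-14T04-00-00.avi ->
    # VideoData2_1904-01-14T04-00-00.sleap.csv
    return Path(video_path).with_suffix(".sleap.csv")

print(expected_sleap_csv("/path/to/VideoData2_1904-01-14T04-00-00.avi"))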

Functions available

HARP Resources

utils.py:

process.py:

SLEAP

load_and_process.py: