Calling `tdt.read_block()` loads all data by default. This can be very slow if you only need some metadata rather than the full neural data. Relatedly, creating multiple `TDTReader` objects across classes means all of the data gets loaded each time, even if you only need the mark track.
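One common way to get this laziness is to defer the expensive load until first attribute access and cache the result. The sketch below is a hypothetical simplification (the class name, fields, and fake block dict are illustrative, not the actual `TDTReader` API), but it shows the pattern with `functools.cached_property`:

```python
from functools import cached_property


class LazyTDTReader:
    """Minimal sketch of a lazy reader (hypothetical simplification;
    the real TDTReader wraps the tdt package's read_block())."""

    load_count = 0  # instrumentation for this sketch only

    def __init__(self, block_path):
        # Construction is cheap: just remember where the block lives.
        self.block_path = block_path

    @cached_property
    def block(self):
        # The expensive read_block()-style load happens here, only on
        # first access; the result is cached for all later accesses.
        type(self).load_count += 1
        return {"streams": {"Wave": [0.0, 0.1]},
                "epocs": {"mark": [1, 2, 3]}}

    @property
    def marks(self):
        # Accessing only the mark track triggers at most one full load
        # in this sketch; a finer-grained version could load per store.
        return self.block["epocs"]["mark"]
```

With this pattern, constructing the reader (or several of them) costs nothing; the data is read once, on first use.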
This PR makes the `TDTReader` somewhat lazy, reorders the steps in the `NWBBuilder` so that neural/mark data is loaded only at the last step (which makes debugging faster when a metadata step fails), and compresses the timeseries data, reducing files to 50-80% of their original size (this should also make saving/loading faster, but that still needs testing).
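For the compression step, NWB files are HDF5 underneath, and HDF5's built-in gzip filter is lossless (in pynwb this is typically enabled by wrapping dataset values in hdmf's `H5DataIO(..., compression='gzip')` before writing). The standalone sketch below, using h5py directly with made-up data and a temporary path, shows the underlying mechanism:

```python
import os
import tempfile

import h5py
import numpy as np

# Fake timeseries data standing in for a neural recording.
data = np.random.default_rng(0).standard_normal((10_000, 8)).astype("float32")

path = os.path.join(tempfile.mkdtemp(), "demo.h5")
with h5py.File(path, "w") as f:
    # Chunked, gzip-compressed dataset; level 4 is a common
    # speed/size tradeoff.
    f.create_dataset("acquisition/raw", data=data,
                     compression="gzip", compression_opts=4, chunks=True)

with h5py.File(path, "r") as f:
    roundtrip = f["acquisition/raw"][:]

# gzip is lossless, so the data round-trips exactly.
assert np.array_equal(roundtrip, data)
```

How much the file shrinks depends on the data; real recordings with correlated samples typically compress far better than the random array used here.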
Checklist:

- [ ] All tests pass on catscan: run `pytest --basetemp=tmp -sv -n 8 tests` on catscan from the root directory
- [ ] If needed, docs have been updated: `docs/source` has been updated for any added, moved, or removed files
- [ ] Docs build with no errors: run `make clean && make html` from the `docs` folder
- [ ] No Python formatting errors: run `flake8 nsds_lab_to_nwb tests` from the root directory