The Challenge data will eventually be hosted on brain-map.org, so mimic that API for fetching.
Not sure how to get an image stack out of allensdk. It's reachable via the RMA RESTful API, but not via the SDK. So this may be a Reconstrue thing: one way to ask for an image stack and SWC that spans both Wasabi and brain-map.org.
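As a sketch of the RMA route: the endpoint paths below are the public api.brain-map.org ones, but the `SectionImage` model name and the `specimen_id` criteria are illustrative assumptions, not verified against the Cell Types schema.

```python
# Sketch: build brain-map.org RMA and image-download URLs for a cell's
# section images. Endpoint paths are the public api.brain-map.org ones;
# the RMA criteria string is an illustrative assumption.
RMA_BASE = "http://api.brain-map.org/api/v2/data/query.json"
IMG_BASE = "http://api.brain-map.org/api/v2/image_download"

def section_images_query(specimen_id, num_rows=50):
    """RMA query listing SectionImage records for a specimen (assumed criteria)."""
    criteria = f"model::SectionImage,rma::criteria,[specimen_id$eq{specimen_id}]"
    return f"{RMA_BASE}?criteria={criteria}&num_rows={num_rows}"

def image_download_url(section_image_id, downsample=4):
    """URL for the image download service, one section image at a time."""
    return f"{IMG_BASE}/{section_image_id}?downsample={downsample}"
```

Iterating the query results and calling the download service per section image would yield the raw stack, one slice per request.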
[ ] PR that back to The Allen, i.e. the SDK /should/ have a way to get a raw image stack by cell ID, in friendly Python.
From the Cell Types docs:

```python
from allensdk.core.cell_types_cache import CellTypesCache

ctc = CellTypesCache(manifest_file='cell_types/manifest.json')

# a list of cell metadata for cells with reconstructions, download if necessary
cells = ctc.get_cells(require_reconstruction=True)

# open the electrophysiology data of one cell, download if necessary
data_set = ctc.get_ephys_data(cells[0]['id'])

# read the reconstruction, download if necessary
reconstruction = ctc.get_reconstruction(cells[0]['id'])
```
So, then the PR would add:

```python
image_stack = ctc.get_image_stack(some_cell_id)
```
Then, can run the visualizers on Allen's raw data.
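Until that lands in the SDK, a Reconstrue-side shim could present one call that spans both stores. Everything below is hypothetical: the challenge-cell registry, the Wasabi bucket layout, and the dispatch rule are made up for illustration.

```python
# Hypothetical sketch: given a cell ID, decide whether the image stack lives
# on Wasabi (Challenge data) or brain-map.org (Allen), and return a
# (source, base_url) pair. Registry and bucket layout are invented here.
CHALLENGE_CELLS = {"challenge-001", "challenge-002"}  # hypothetical registry

def image_stack_locator(cell_id):
    """Return (source, base_url) for the store that holds this cell's stack."""
    if str(cell_id) in CHALLENGE_CELLS:
        # hypothetical Wasabi bucket layout
        return ("wasabi", f"https://s3.wasabisys.com/reconstrue/{cell_id}/stack/")
    # otherwise assume it is an Allen cell served from brain-map.org
    return ("brain-map", f"http://api.brain-map.org/api/v2/image_download/{cell_id}")
```

The visualizers would then only need the base URL, regardless of which backend the cell came from.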
Also, see #50.