Magic-Ludo opened 1 month ago
Hi!
Thanks for writing in. The first thing I would try, since you seem to be inserting these slices in an XY plane, is to set the chunk size to [1024,1024,1]; currently it is a single pixel in the x direction.
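For reference, a minimal sketch of creating the layer info with those chunks; the data_type, resolution, path, and XY extent below are placeholders, not values from your dataset:

from cloudvolume import CloudVolume

info = CloudVolume.create_new_info(
    num_channels=1,
    layer_type='image',
    data_type='uint16',              # placeholder: match your tif dtype
    encoding='raw',
    resolution=[4, 4, 40],           # placeholder: nm per voxel
    voxel_offset=[0, 0, 0],
    volume_size=[2048, 2048, 3620],  # placeholder XY extent; Z = your 3620 slices
    chunk_size=[1024, 1024, 1],      # one XY tile per chunk, a single slice deep
)
vol = CloudVolume('file:///path/to/precomputed', info=info)  # placeholder path
vol.commit_info()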
Second, maybe try a transpose instead of a reshape (slice_data.T), then an np.squeeze. CloudVolume expects 4D arrays, so add two trivial dimensions to the end:
import numpy as np

# transpose to swap the axes, drop singleton dimensions,
# then pad back up to the 4D (x, y, z, channel) shape CloudVolume expects
slice_data = np.squeeze(slice_data.T)
while slice_data.ndim < 4:
    slice_data = slice_data[..., np.newaxis]
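Putting the two snippets above together, and assuming your H5 dataset is laid out (Z, Y, X) and named 'data' (both assumptions; adjust to your file), you could then write one slice at a time without ever loading the full 900 GB:

import h5py

with h5py.File('/path/to/data.h5', 'r') as f:  # placeholder path
    dset = f['data']                           # dataset name is an assumption
    for z in range(dset.shape[0]):
        slice_data = dset[z]                   # h5py reads just this (Y, X) slice
        slice_data = np.squeeze(slice_data.T)
        while slice_data.ndim < 4:
            slice_data = slice_data[..., np.newaxis]
        vol[:, :, z : z + 1] = slice_data      # now shaped (X, Y, 1, 1)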
Let me know if this helps.
I have 3620 images in .tif format, totalling 1 TB. Here's their info:

I'm looking to use Neuroglancer to visualize this dataset. To achieve this, I have converted all the .tif files to .h5 format with the following script:

The generated .h5 file is very large (900 GB) and therefore cannot be loaded into RAM. From what I've read, you can pre-compute the contents of this file with CloudVolume so that it is displayed in chunks, on demand. I tried this with the following script:

But at the output I run into a size incompatibility problem:
I've tried changing the dimensions and reversing the values for each slice, but I always end up with a problem like this...
Thanks for your help!