Closed nicholasturner1 closed 7 years ago
Should be all set now, though again, I'm not sure if this works with your other backends as is.
@nicholasturner1 I am testing with pinky40_v8/image/4_4_40, and the offset makes the BigArray fail for this dataset!
julia> ba[24065:24576, 10241:10752, 1:64];
string(chunkGlobalRange) = "23636-24148_9764-10276_0-64"
typeof(e) = BigArrays.NoSuchKeyException
no suck key in kvstore: BigArrays.NoSuchKeyException(), will fill this block as zeros
string(chunkGlobalRange) = "24148-24660_9764-10276_0-64"
typeof(e) = BigArrays.NoSuchKeyException
no suck key in kvstore: BigArrays.NoSuchKeyException(), will fill this block as zeros
string(chunkGlobalRange) = "23636-24148_10276-10788_0-64"
typeof(e) = BigArrays.NoSuchKeyException
no suck key in kvstore: BigArrays.NoSuchKeyException(), will fill this block as zeros
string(chunkGlobalRange) = "24148-24660_10276-10788_0-64"
typeof(e) = BigArrays.NoSuchKeyException
no suck key in kvstore: BigArrays.NoSuchKeyException(), will fill this block as zeros

julia> 24065-23636
429

julia> 10241-9764
477
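For reference, here is a sketch of the chunk boundary I would expect if chunk boundaries were aligned to the voxel_offset grid. The helper `chunk_start` is hypothetical (not the actual BigArrays code), just to make the arithmetic explicit:

```julia
# Hypothetical sketch: the chunk start one would expect for a 0-based
# global coordinate, assuming chunk boundaries are aligned to voxel_offset.
chunk_start(idx, offset, chunksize) = offset + div(idx - offset, chunksize) * chunksize

# x coordinate 24064 (Julia index 24065), voxel_offset 10752, chunk size 512:
chunk_start(24064, 10752, 512)  # 24064, not the 23636 seen in the log above
```

Note that 10752 is itself a multiple of 512, so under this interpretation the offset would not even shift the chunk grid; the 23636 boundary in the log sits on neither the offset grid nor the origin grid (23636 mod 512 = 84).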
The coordinate translation also does not match the voxel_offset in the info file:
{
  "num_channels": 1,
  "type": "image",
  "data_type": "uint8",
  "scales": [
    {"encoding": "raw", "chunk_sizes": [[512, 512, 64]], "key": "4_4_40", "resolution": [4, 4, 40], "voxel_offset": [10752, 4608, 0], "size": [53760, 38912, 1024]},
    {"encoding": "raw", "chunk_sizes": [[512, 512, 64]], "key": "8_8_40", "resolution": [8, 8, 40], "voxel_offset": [5376, 2304, 0], "size": [26880, 19456, 1024]},
    {"encoding": "raw", "chunk_sizes": [[512, 512, 64]], "key": "16_16_40", "resolution": [16, 16, 40], "voxel_offset": [2688, 1152, 0], "size": [13440, 9728, 1024]},
    {"encoding": "raw", "chunk_sizes": [[512, 512, 64]], "key": "32_32_40", "resolution": [32, 32, 40], "voxel_offset": [1344, 576, 0], "size": [6720, 4864, 1024]},
    {"encoding": "raw", "chunk_sizes": [[512, 512, 64]], "key": "64_64_40", "resolution": [64, 64, 40], "voxel_offset": [672, 288, 0], "size": [3360, 2432, 1024]},
    {"encoding": "raw", "chunk_sizes": [[512, 512, 64]], "key": "128_128_40", "resolution": [128, 128, 40], "voxel_offset": [336, 144, 0], "size": [1680, 1216, 1024]},
    {"encoding": "raw", "chunk_sizes": [[512, 512, 64]], "key": "256_256_40", "resolution": [256, 256, 40], "voxel_offset": [168, 72, 0], "size": [840, 608, 1024]},
    {"encoding": "raw", "chunk_sizes": [[512, 512, 64]], "key": "512_512_40", "resolution": [512, 512, 40], "voxel_offset": [84, 36, 0], "size": [420, 304, 1024]}
  ]
}
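As a sanity check on the info file above: each mip level doubles the x/y resolution, so each level's x/y voxel_offset should be exactly half the previous level's. A quick sketch using the offsets copied from the info file:

```julia
# voxel_offset (x, y) per scale, copied from the info file above.
offsets = [(10752, 4608), (5376, 2304), (2688, 1152), (1344, 576),
           (672, 288), (336, 144), (168, 72), (84, 36)]

# Each level's offset should be exactly half the previous one.
for i in 2:length(offsets)
    @assert offsets[i] == (offsets[i-1][1] ÷ 2, offsets[i-1][2] ÷ 2)
end
```

So the info file itself is internally consistent; the mismatch must come from how the offset is applied during indexing.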
That exact example works fine for me on the offsets branch, but I'll look into whether it interacts badly with something on the master branch.
This does not work well with the other BigArrays in S3. The meaning of "offset" differs from our understanding here: the file names of my current chunks encode the real global offset, not coordinates relative to the offset in the info file!
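To make the two readings of "offset" concrete, here are two hypothetical key builders for one axis of a chunk key like 23636-24148 (illustrative only; neither is the actual BigArrays code):

```julia
# (a) key encodes absolute global coordinates (what the existing S3 chunks use)
key_absolute(start, stop) = "$(start)-$(stop)"

# (b) key encodes coordinates relative to the info file's voxel_offset
key_relative(start, stop, offset) = "$(start - offset)-$(stop - offset)"

key_absolute(23636, 24148)         # "23636-24148"
key_relative(23636, 24148, 10752)  # "12884-13396"
```

Chunks written under one convention cannot be found by a reader using the other, which would explain the NoSuchKeyException fallbacks in the log above.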
I have disabled the feature with this commit: https://github.com/seung-lab/BigArrays.jl/commit/1255075d70909c8f63652367b187b563e6f94b65
Hm... I don't think I understand your distinction, but I did explain what I meant by offset when we talked about doing this in the first place. We should review this when we can.
We've made some progress with this offline.
Pulling chunks with the code from the S3Dicts offsets branch and BigArrays offsets branch seems to work. There can be an issue if you specify single indices instead of ranges (e.g. 4 instead of 4:4), but that problem affects both the 'offsets' branch and the master branch.
@jingpengwu found that CartesianRange behaves very differently for single indices and for ranges, e.g.
CartesianRange((1,1,50)) => CartesianRange{CartesianIndex{3}}(CartesianIndex{3}((1,1,1)),CartesianIndex{3}((1,1,50)))
while
CartesianRange((1,1,50:50)) => CartesianRange{CartesianIndex{3}}(CartesianIndex{3}((1,1,50)),CartesianIndex{3}((1,1,50)))
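One way to sidestep that discrepancy is to coerce every scalar index to a one-element range before building the range over chunks. This is a sketch of that idea with hypothetical helper names, not the fix that was actually committed:

```julia
# Sketch: coerce each index to a UnitRange so single indices and ranges
# behave identically downstream (hypothetical helpers, not the committed fix).
normalize_index(i::Integer) = i:i
normalize_index(r::UnitRange) = r
normalize_indexes(idxs::Tuple) = map(normalize_index, idxs)

normalize_indexes((1, 1, 50))  # (1:1, 1:1, 50:50)
```

With all indices normalized this way, the two CartesianRange constructions above would coincide.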
The original issue above is still a mystery, since all of its indices are ranges. I haven't been able to reproduce that error though.
Just fixed the single-element indexing issue: https://github.com/seung-lab/BigArrays.jl/commit/dfb3b8b2ef01b68ca8adbb6a964cae880e7d9bdd
The way offsets are incorporated into the BigArray fields is a bit clunky - I'm not sure it will work for all of the backends. It seems to do the job for now, though.