Closed austinhoag closed 2 years ago
Thank you for reporting Austin! I was able to reproduce this. The error is emitted from tinybrain here: https://github.com/seung-lab/tinybrain/blob/master/tinybrain/accelerated.pyx#L79-L80 I'll see if I can isolate this further and figure out what's going on.
Looks like the chunk at the edge of the image is clipped. Shape: (256, 2, 64, 1). This would trigger the above logic in tinybrain. I need to think about how to resolve this in a cleaner way, but one easy fix is to simply extend the size of the base image so that the edge isn't so small.
Okay, I think I see what needs to be done.
a. tinybrain has a slight bug in the above logic. It should be < not <=. That alone would fix this case.
b. There would still be weird general cases where you want to downsample a 1px strip 5 levels. I think in that case, the pixel should just be repeated up all the levels. Maybe that should be behind a flag?
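To make the off-by-one concrete, here is a minimal sketch of the kind of guard involved. This is my reconstruction, not tinybrain's actual source; max_mips and check_num_mips are hypothetical names:

```python
import math

def max_mips(sx, sy):
    """Number of 2x2 halvings before an XY dimension drops below 1 voxel."""
    return int(math.log2(min(sx, sy)))

def check_num_mips(shape, num_mips):
    sx, sy = shape[0], shape[1]
    # buggy version:  if max_mips(sx, sy) <= num_mips: raise ...
    # fixed version uses the strict comparison
    if max_mips(sx, sy) < num_mips:
        raise ValueError(
            "Can't downsample smaller than the smallest XY plane dimension."
        )

# A (256, 2, 64, 1) chunk supports exactly one more 2x2 halving of the
# 2-voxel axis, so num_mips=1 should be accepted. The <= comparison
# rejected it; the strict < accepts it.
check_num_mips((256, 2, 64, 1), 1)
```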
Hi Austin! It took me a bit to get the issue in tinybrain fixed b/c the build system was creaky. tinybrain 1.2.2 will be deployed shortly and will fix your immediate problem.
Hi Will, thanks for looking into this so promptly. That solved the issue for the mip=0 downsamples, but the same error arose on the second iteration in my for loop, i.e. for making downsamples from mip=1. Here is the updated info file:
{
"data_type": "uint16",
"num_channels": 1,
"scales": [
{
"chunk_sizes": [
[
128,
128,
64
]
],
"encoding": "raw",
"key": "1866_1866_10000",
"resolution": [
1866,
1866,
10000
],
"size": [
7204,
8706,
599
],
"voxel_offset": [
0,
0,
0
]
},
{
"chunk_sizes": [
[
128,
128,
64
]
],
"encoding": "raw",
"key": "3732_3732_10000",
"resolution": [
3732,
3732,
10000
],
"size": [
3602,
4353,
599
],
"voxel_offset": [
0,
0,
0
]
},
{
"chunk_sizes": [
[
128,
128,
64
]
],
"encoding": "raw",
"key": "7464_7464_10000",
"resolution": [
7464,
7464,
10000
],
"size": [
1801,
2177,
599
],
"voxel_offset": [
0,
0,
0
]
}
],
"type": "image"
}
Traceback:
Mip: 1
Chunk size: [128, 128, 64]
Downsample factors: [2, 2, 1]
Volume Bounds: Bbox([0, 0, 0],[3602, 4353, 599], dtype=int32)
Selected ROI: Bbox([0, 0, 0],[3602, 4353, 599], dtype=int32)
Tasks: 9%|████████▎ | 255/2700 [00:19<03:09, 12.93it/s]
multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/pathos/helpers/mp_helper.py", line 15, in <lambda>
func = lambda args: f(*args)
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/taskqueue/taskqueue.py", line 501, in _task_execute
task.execute(*args, **kwargs)
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/taskqueue/queueablefns.py", line 78, in execute
self(*args, **kwargs)
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/taskqueue/queueablefns.py", line 87, in __call__
return self.tofunc()()
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/igneous/tasks/image.py", line 456, in DownsampleTask
return TransferTask(
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/igneous/tasks/image.py", line 433, in TransferTask
downsample_and_upload(
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/igneous/tasks/image.py", line 62, in downsample_and_upload
mips = tinybrain.downsample_with_averaging(
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/tinybrain/downsample.py", line 52, in downsample_with_averaging
return tinybrain.accelerated.average_pooling_2x2(img, num_mips)
File "tinybrain/accelerated.pyx", line 80, in tinybrain.accelerated.average_pooling_2x2
ValueError: Can't downsample smaller than the smallest XY plane dimension.
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "make_precomputed_corrected.py", line 175, in <module>
tq.execute()
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/taskqueue/taskqueue.py", line 477, in execute
for _ in executor.imap(_task_execute, self.queue):
File "/usr/people/ahoag/.conda/envs/precomputed/lib/python3.8/site-packages/multiprocess/pool.py", line 868, in next
raise value
ValueError: Can't downsample smaller than the smallest XY plane dimension.
Yea the next mip level will have a strip of a single voxel width at the edge. I'll need to figure out some more robust way of handling this edge case. Adjusting the size of the base image is probably your best bet for now to make the trailing edge thicker.
Why does that happen? The x and y dimensions are even in the base image (7204,8706). In what dimension do I need to pad?
It has to do with the chunk size. The last chunk on the y-axis is 8706 % 128 = 2 voxels wide, so after one halving it is only a single voxel. If you expand the size to 8704 + 128 = 8832 you should have no problems.
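The arithmetic can be checked in a couple of lines (trailing_chunk and pad_to_chunks are illustrative helpers, not igneous functions):

```python
def trailing_chunk(size, chunk):
    """Width of the last (possibly partial) chunk along one axis."""
    rem = size % chunk
    return rem if rem else chunk

def pad_to_chunks(size, chunk):
    """Round size up to the next full chunk boundary."""
    return ((size + chunk - 1) // chunk) * chunk

print(trailing_chunk(8706, 128))  # -> 2 (the problem strip)
print(pad_to_chunks(8706, 128))   # -> 8832 (= 8704 + 128)
```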
As a general solution, when it gets into that state, I could probably either trim the trailing edge on the next mip or duplicate it depending on what makes sense. I'll have to think about this some more. Generally I err towards retaining the edge rather than clipping it to avoid data loss.
Oh I see thanks for explaining that. In that case I can also change the chunk size so that I won't run into this problem for the number of mips I want. I'll close this since you have solved my issue, but feel free to keep it open for your internal use.
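One way to pick a safe chunk or volume size up front is to simulate the halvings. A minimal sketch, assuming igneous-style ceiling division per mip (which matches the sizes in the info file above, e.g. 4353 -> 2177); survives is a hypothetical helper:

```python
def survives(size, chunk, num_mips):
    """True if the trailing chunk along this axis stays at least 2 voxels
    wide at every mip we downsample from, so 2x2 average pooling never
    sees a 1-voxel strip."""
    for _ in range(num_mips):
        rem = size % chunk or chunk
        if rem < 2:
            return False
        size = (size + 1) // 2  # ceiling halving per mip level
    return True

print(survives(8706, 128, 2))  # False: fails when downsampling from mip 1
print(survives(8832, 128, 2))  # True after padding to a chunk boundary
```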
I have a precomputed image layer with a single mip layer that I am trying to downsample to multiple mip levels. Here is what the info file looks like:
You'll notice that it has two entries in the "scales" key. The second is from trying to create a mip=1 level via downsampling, which is when the error occurs. The code I am using to downsample is the following:

The code gets about 10% done on the first iteration in the for loop and then returns this error:
I have been using this code for many months now, and I have never encountered this error. The dataset from which the cloudvolume was made is not corrupted. Some background: I made a cloudvolume from a TIF stack with chunk size
[1024,1024,1]
at first and then rechunked it using igneous.task_creation.create_transfer_tasks()
so that the downsamples could be made more isotropically. The info file at the top here is from the rechunked layer.

I am using the
python setup.py develop
install method for igneous. I just pulled the repo and retried using the latest code and got the same error.

Thanks, Austin