AllenCellModeling / pytorch_fnet

Three dimensional cross-modal image inference

Specify dimension requirement #105

Open jxchen01 opened 5 years ago

jxchen01 commented 5 years ago

It looks like there is a requirement on the minimum number of z slices.

"ValueError: Input array must be at least length 32 in first dimension"

When batch-processing a large set of images, it would be great to skip the "bad" image and raise a flag, rather than failing with an assertion error.
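
A minimal sketch of one way to get that behavior today, by catching the error per image inside the batch loop. `load_image`, `run_model`, and `save_prediction` are hypothetical placeholders, not the repository's actual API:

```python
# Hedged sketch: skip images that fail the z-dimension check instead of aborting
# the whole batch. load_image, run_model, and save_prediction are hypothetical
# placeholders standing in for whatever loading/prediction calls you use.
skipped = []
for path in image_paths:
    try:
        img = load_image(path)
        pred = run_model(img)
        save_prediction(pred, path)
    except ValueError as err:  # e.g. "Input array must be at least length 32 in first dimension"
        skipped.append((path, str(err)))  # flag the image and keep going
print(f"Skipped {len(skipped)} image(s):")
for path, reason in skipped:
    print(f"  {path}: {reason}")
```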

ateneyck1 commented 2 years ago

Do we know why there is a requirement that the minimum number of z slices be 32? In the Nature Methods paper, I do not see why 32 z slices were required.

jxchen01 commented 2 years ago

> Do we know why there is a requirement that the minimum number of z slices be 32? In the Nature Methods paper, I do not see why 32 z slices were required.

Dear @ateneyck1, z >= 32 is required by the model. Namely, after a certain number of pooling layers, if the number of z slices is too small, a numerical error will occur.

ateneyck1 commented 2 years ago

@jxchen01 do you know where the pooling layers are defined?

jxchen01 commented 2 years ago

> @jxchen01 do you know where the pooling layers are defined?

Actually, the downsampling is not pooling but convolution with stride = 2. See: https://github.com/AllenCellModeling/pytorch_fnet/blob/master/fnet/nn_modules/fnet_nn_3d_params.py#L51

The number of such layers is defined by depth: https://github.com/AllenCellModeling/pytorch_fnet/blob/master/fnet/nn_modules/fnet_nn_3d_params.py#L7
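
To make the mechanism concrete, here is a minimal sketch (not the repository's actual code; the exact kernel sizes and depth in fnet_nn_3d_params.py may differ) of how a stride-2 3D convolution halves the z dimension, and why a U-Net-style skip connection then needs z to be a multiple of 2^depth:

```python
import torch
import torch.nn as nn

# One downsampling / upsampling pair, mimicking a U-Net level built from
# stride-2 convolutions (as linked above). Channel counts are arbitrary.
down = nn.Conv3d(1, 8, kernel_size=2, stride=2)          # z -> z // 2
up = nn.ConvTranspose3d(8, 1, kernel_size=2, stride=2)   # z // 2 -> 2 * (z // 2)

for z in (32, 20, 3):
    x = torch.randn(1, 1, z, 64, 64)   # (batch, channel, z, y, x)
    y = down(x)
    x_up = up(y)
    print(f"z={z}: downsampled to {y.shape[2]}, round-trips to {x_up.shape[2]}")
    # For odd z (e.g. 3) the round trip returns 2 instead of 3, so concatenating
    # x with x_up in a skip connection fails. Stacking `depth` such levels means
    # z must be a multiple of 2**depth (and large enough not to collapse to 1).
```

This is presumably why the data pipeline enforces a minimum z up front rather than letting the network fail partway through the forward pass.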