AllenCellModeling / pytorch_fnet

Three dimensional cross-modal image inference

torch.Size([1, 7, 624, 924]) incompatible with patch_shape [32, 64, 64] #172

Open michalstepniewski opened 4 years ago

michalstepniewski commented 4 years ago

ValueError: Dataset item 13, component 0 shape torch.Size([1, 7, 624, 924]) incompatible with patch_shape [32, 64, 64]

Hello, I am trying to train your network on the Tom20 mitochondria-stained dataset provided by you on the webpage as CZI files. I took only the slice along the singleton dimensions and transposed the remaining dimensions:

```python
image_proper = image[0, 0, :, :, :, :, 0]
image_proper_transposed = image_proper.transpose(1, 0, 2, 3)
```

so that the dimensionality of the file matches the dimensionality of the demonstration dataset, i.e. in that case (65, 7, 624, 924). What am I doing wrong? What would you recommend?

Here is the traceback:

```
INFO:fnet.cli.train_model: History loaded from: /newvolume/pytorch_fnet/examples/model/losses.csv
Buffering images:   0%| | 0/16 [00:00<?, ?it/s]
INFO:fnet.data.bufferedpatchdataset: Added item 13 into buffer
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/bin//fnet", line 11, in <module>
    sys.exit(main())
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fnet/cli/main.py", line 41, in main
    func(args)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fnet/cli/train_model.py", line 131, in main
    bpds_train = get_bpds_train(args)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fnet/cli/train_model.py", line 64, in get_bpds_train
    return BufferedPatchDataset(dataset=ds, **args.bpds_kwargs)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fnet/data/bufferedpatchdataset.py", line 54, in __init__
    self.insert_new_element_into_buffer()
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fnet/data/bufferedpatchdataset.py", line 110, in insert_new_element_into_buffer
    self._check_last_datum()
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fnet/data/bufferedpatchdataset.py", line 86, in _check_last_datum
    f"Dataset item {idx_buf}, component {idx_c} shape "
ValueError: Dataset item 13, component 0 shape torch.Size([1, 7, 624, 924]) incompatible with patch_shape [32, 64, 64]
```

fcollman commented 4 years ago

We are a little confused about exactly what you are running, and with which files. Could you be more explicit about that? What's odd is the dimension of 7 in the dataset size: our z-stacks have many more sections than that, which makes me think that this may be channels, and that you are loading in more channels than you need. One funny aspect of the raw data is that, to avoid some problems with filter timing, there were 'blank' channels in the dataset, so not all 'channels' actually have data. I think there were 7 raw channels, but our dataloader classes took care of this.

michalstepniewski commented 4 years ago

Thank you very much for the swift reply. My starting point is the example in pytorch_fnet/examples/download_and_train.py. Until now I have been successful in running the example code 'as is'. I was also successful in changing the target channel to 1 (the cell membranes), at least in training. The TIFF files downloaded by the example code have 7 channels, and that does not seem to be an issue.

In the next step I downloaded the Tom20 training dataset from your website. The files are in .czi format. I loaded them using the czifile library, which I believe was written by your team. I then took a 4-dimensional slice of the array (channel and x, y, z; there was only one slice along the other axes) and transposed it so that the coordinate axes are in the same order as in the TIFF files downloaded by the example. Then I put the files in the 'fov' folder and commented out some lines in the example code so that it takes the generated .tiff files as input. When I ran the code I got the above error.

Yours sincerely, Michal Stepniewski
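Concretely, my conversion was roughly the following, shown here on a small dummy NumPy array. The axis order (S, T, C, Z, Y, X, 0) is my assumption about what czifile returned; the real order is reported by the file's `axes` attribute and should be checked per file:

```python
import numpy as np

# Dummy stand-in for the array returned by czifile.imread(); the axis
# order (S, T, C, Z, Y, X, 0) is an assumption and can differ per file.
czi_arr = np.zeros((1, 1, 7, 65, 10, 12, 1))

# Drop the singleton axes, keeping (C, Z, Y, X)
image_proper = czi_arr[0, 0, :, :, :, :, 0]
print(image_proper.shape)             # (7, 65, 10, 12)

# The example TIFFs are ordered (Z, C, Y, X), so swap the first two axes
image_proper_transposed = image_proper.transpose(1, 0, 2, 3)
print(image_proper_transposed.shape)  # (65, 7, 10, 12)
```

If the axis I assumed to be Z turns out to have length 1 (as in the error above), the axes were probably misidentified for that file.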

alxndrkalinin commented 4 years ago

Seems like there might be a requirement on the minimum number of z slices.
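The check that raises the error effectively requires each trailing spatial dimension of a dataset item to be at least as large as the corresponding entry of `patch_shape`. A rough sketch of that shape logic (not the actual fnet implementation):

```python
def patch_fits(item_shape, patch_shape):
    """Sketch of the compatibility check behind the error: each trailing
    spatial dimension of the item must be >= the matching patch dimension.
    (Illustrative only, not the actual fnet code.)"""
    spatial = tuple(item_shape)[-len(patch_shape):]
    return all(s >= p for s, p in zip(spatial, patch_shape))

# The failing item: only 7 slices along the axis the loader treats as Z,
# but the default patch_shape asks for 32.
print(patch_fits((1, 7, 624, 924), [32, 64, 64]))  # False
# A patch whose Z extent fits within the stack passes the shape check.
print(patch_fits((1, 7, 624, 924), [7, 64, 64]))   # True
```

So either the stack needs more z slices than the loader currently sees, or the patch's z extent would have to be reduced to fit.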