Apparently, there is a bug in the algorithm behind the function `generate_batch_from_files` in the file `dataloader.py` in the 2D directory. It does not read all slices of some files.
Below is some printout from inside `generate_batch_from_files` for the first few iterations, with a `batch_size` of 320 and `num_slices_per_scan` of 155.
For this setup, each time a queue of images is created, three volumes of 155 slices are read, resulting in a stack of 465 slices, which is larger than the batch size. The stack should therefore be cropped to 320 slices to match the batch size.
When `idy=0`, the stack of images is cropped from the end, meaning the first 320 slices are kept and the rest are thrown away. When `idy=30`, the stack is cropped from the beginning, meaning the last 320 slices are kept and the rest are thrown away. Each time, a new set of three files is read, and the leftover slices from the previous step are never used.
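To make the effect concrete, here is a minimal, hypothetical sketch of the cropping behavior described above. The names (`num_files`, the `(file, slice)` labelling, the `idy` alternation) are illustrative assumptions, not the repository's actual code; it only reproduces the crop-from-end / crop-from-beginning pattern and counts which slices are never used.

```python
# Illustrative sketch of the described bug, NOT the actual dataloader code.
num_slices_per_scan = 155
batch_size = 320
num_files = 9  # a pretend dataset of 9 volumes

seen = set()
idy = 0
for start in range(0, num_files, 3):
    # Each iteration reads a fresh set of 3 volumes -> 465 slices,
    # labelled here by (file_index, slice_index).
    stack = [(f, s) for f in range(start, start + 3)
                    for s in range(num_slices_per_scan)]
    if idy == 0:
        batch = stack[:batch_size]   # keep the first 320, drop the tail
    else:
        batch = stack[-batch_size:]  # keep the last 320, drop the head
    seen.update(batch)
    idy = 30 - idy  # alternate 0 / 30, as in the printout

total = num_files * num_slices_per_scan
print(f"slices used: {len(seen)} of {total}")  # many slices are never seen
```

Under this sketch, each group of three volumes contributes only 320 of its 465 slices, and the 145 leftover slices are discarded rather than carried over into the next batch, so roughly a third of the middle volume in each group is never read.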
If this repository is still active, I would really appreciate your reply.