lfz / DSB2017

The solution of team 'grt123' in DSB2017
MIT License

training a nodule detector error #11

Closed q5390498 closed 6 years ago

q5390498 commented 7 years ago

I only use .mhd files for training, without the DSB3 data, but in step 2, when I run ./run_training.sh, it outputs an error.

    Traceback (most recent call last):
      File "main.py", line 350, in <module>
        main()
      File "main.py", line 168, in main
        train(train_loader, net, loss, epoch, optimizer, get_lr, args.save_freq, save_dir)
      File "main.py", line 180, in train
        for i, (data, target, coord) in enumerate(data_loader):
      File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 212, in __next__
        return self._process_next_batch(batch)
      File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 239, in _process_next_batch
        raise batch.exc_type(batch.exc_msg)
    AssertionError: Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 41, in _worker_loop
        samples = collate_fn([dataset[i] for i in batch_indices])
      File "DSB2017-master/training/detector/data.py", line 103, in __getitem__
        label = self.label_mapping(sample.shape[1:], target, bboxes)
      File "DSB2017-master/training/detector/data.py", line 281, in __call__
        assert(input_size[i] % stride == 0)
    AssertionError

In README.md, I see that the input size is 128x128x128, but the input_size (in DSB2017-master/training/detector/data.py, line 281) of my data is not always 128; there are values like 144, 127, 75 and so on. Must the input_size be 128? Also, I did not find the code that processes the data to 128x128x128; where is it? Thank you very much.
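For reference, the assertion that fires checks that every spatial dimension of the sample is divisible by the detector's output stride. Below is a minimal sketch of that check, assuming a stride of 4 (the stride value is an assumption based on the usual detector config, not something confirmed in this thread):

    # sketch of the divisibility check behind the AssertionError in data.py
    stride = 4  # assumed detector output stride; see the detector config / res18.py
    for input_size in [(128, 128, 128), (144, 127, 75)]:
        ok = all(s % stride == 0 for s in input_size)
        print(input_size, 'passes the assert' if ok else 'fails the assert')

So a cube of 128 passes, while sizes like 127 or 75 trip the assert; the deeper issue is that the crop is expected to always come out at the configured crop size.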

lfz commented 7 years ago

The crop size is defined in res18.py.

Please provide more info.

Please try to debug this step:

    crop = imgs[:,
        max(start[0],0):min(start[0] + crop_size[0],imgs.shape[1]),
        max(start[1],0):min(start[1] + crop_size[1],imgs.shape[2]),
        max(start[2],0):min(start[2] + crop_size[2],imgs.shape[3])]

line 220 in training/detector/data.py

https://github.com/lfz/DSB2017/blob/master/training/detector/data.py/#L220
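For example, a check like the following could be placed right after that slice to see which axis is being clipped (a sketch only; it relies on the local names imgs, start and crop_size from data.py):

    # sketch: report when the slice is clipped by the volume boundary,
    # which is what makes crop.shape smaller than crop_size
    for axis in range(3):
        lo = max(start[axis], 0)
        hi = min(start[axis] + crop_size[axis], imgs.shape[axis + 1])
        if hi - lo != crop_size[axis]:
            print('axis %d clipped: start=%d, got %d voxels instead of %d'
                  % (axis, start[axis], hi - lo, crop_size[axis]))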

q5390498 commented 7 years ago

Thank you for your reply. But at line 68 in detector/data.py, after calling the Crop() function, self.crop returns arrays of different shapes, not all (1, 128, 128, 128). Is this normal?

lfz commented 7 years ago

It's abnormal

XavierLinNow commented 7 years ago

Hi @q5390498, have you figured it out? I have the same question.

q5390498 commented 7 years ago

@XavierLinNow No, I have given up.

Kongsea commented 7 years ago

I have encountered the same problem as you. However, when I train the model on the original data, without lung segmentation and the other preprocessing steps, the error goes away... Could anyone give some advice?

wentaozhu commented 7 years ago

I think we should revise the code to make the size equal crop_size. I do not think

    crop = imgs[:,
        max(start[0],0):min(start[0] + crop_size[0],imgs.shape[1]),
        max(start[1],0):min(start[1] + crop_size[1],imgs.shape[2]),
        max(start[2],0):min(start[2] + crop_size[2],imgs.shape[3])]

will make the cropped image have shape (128, 128, 128). @lfz
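One way to force the output back to crop_size would be to pad the clipped slice with a constant value, along the lines of this sketch (the names crop, start, crop_size, imgs and pad_value are assumed from the surrounding code, and this is not necessarily how the repository handles it):

    import numpy as np

    # pad each spatial axis so the crop ends up exactly crop_size,
    # filling the region that fell outside the volume with pad_value
    pad = [[0, 0]]  # no padding on the channel axis
    for axis in range(3):
        left = max(0, -start[axis])
        right = max(0, start[axis] + crop_size[axis] - imgs.shape[axis + 1])
        pad.append([left, right])
    crop = np.pad(crop, pad, 'constant', constant_values=pad_value)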

wentaozhu commented 7 years ago

Here are the start values and the corresponding img shapes:

    [1519, -288, -109]   (1, 290, 190, 285)
    [-385, -213, -193]   (1, 288, 211, 314)
    [212, 32, -134]      (1, 272, 187, 244)
    [-492, -169, -206]   (1, 268, 189, 282)
    [131, 73, 62]        (1, 245, 165, 148)
    [83, 165, 151]       (1, 295, 228, 315)
    [333, -78, -23]      (1, 281, 194, 282)
    [-53, -160, -54]     (1, 250, 201, 286)
    [-393, -286, -16]    (1, 292, 179, 279)
    [-289, -250, -152]   (1, 280, 247, 319)
    [210, -85, -77]      (1, 312, 216, 280)
    [157, 122, 159]      (1, 276, 191, 274)
    [-476, -75, -94]     (1, 286, 213, 279)
    [-435, -176, -81]    (1, 275, 245, 287)
    [727, -105, -277]    (1, 275, 193, 284)
    [-331, -169, -176]   (1, 261, 226, 291)
    [124, 64, 64]        (1, 235, 177, 217)
    [2656, -127, -104]   (1, 257, 233, 341)
    [185, 133, 122]      (1, 272, 199, 276)
    [118, 118, 193]      (1, 238, 189, 290)
    [58, -126, -27]      (1, 283, 184, 283)
    [-1, -232, -78]      (1, 257, 203, 284)
    [686, -223, -155]    (1, 212, 181, 239)
    [-539, 36, -125]     (1, 248, 194, 282)
    [98, -95, -39]       (1, 295, 228, 315)
    [-711, -94, -24]     (1, 283, 198, 270)
    [130, -73, -191]     (1, 291, 237, 288)
    [848, -12, -100]     (1, 215, 201, 261)
    [269, -8, -161]      (1, 203, 186, 267)
    [185, 88, 140]       (1, 273, 173, 258)
    [95, 107, 95]        (1, 257, 200, 256)
    [-305, -294, -2]     (1, 255, 221, 318)
    [1042, -103, -165]   (1, 286, 196, 293)
    [-151, -114, -26]    (1, 313, 233, 343)
    [76, 103, 209]       (1, 285, 253, 341)
    [135, 137, 202]      (1, 305, 240, 321)
    [-469, -163, -140]   (1, 275, 216, 306)
    [112, -169, -143]    (1, 229, 173, 277)
    [131, 97, 82]        (1, 213, 197, 262)
    [111, 72, 81]        (1, 218, 199, 224)
    [1111, -277, -116]   (1, 236, 183, 261)
    [109, 120, 117]      (1, 294, 195, 271)
    [126, 102, 156]      (1, 217, 183, 298)
    [510, 1, -40]        (1, 308, 183, 265)
    [203, 85, 177]       (1, 282, 175, 268)
    [-165, -110, -66]    (1, 293, 228, 320)
    [-403, -106, -124]   (1, 269, 191, 288)

lfz commented 7 years ago

Hi, since I am not working on this project now, I cannot reproduce this problem on my dataset. The start should be constrained.

Please provide more info.
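A minimal way to constrain it would be to clip start into the valid range before slicing (a sketch, assuming imgs has shape (1, D, H, W) and crop_size has three elements; volumes smaller than crop_size would still need padding afterwards):

    import numpy as np

    # keep the crop window inside the volume so the slice is never clipped
    start = np.array(start)
    max_start = np.array(imgs.shape[1:]) - np.array(crop_size)
    start = np.clip(start, 0, np.maximum(max_start, 0))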

q1nwu commented 6 years ago

I also encountered this problem, so I tried to debug

    crop = imgs[:,
        max(start[0],0):min(start[0] + crop_size[0],imgs.shape[1]),
        max(start[1],0):min(start[1] + crop_size[1],imgs.shape[2]),
        max(start[2],0):min(start[2] + crop_size[2],imgs.shape[3])]

I found that crop.shape is guaranteed to be (1, 128, 128, 128) if the left and right lungs were successfully extracted by the function step1_python.
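Based on that observation, a quick sanity check over the preprocessing output can flag scans where the lung extraction likely failed before training starts (a sketch; the prep_result directory and the *_clean.npy naming are assumptions about the preprocessing script's output):

    import glob
    import numpy as np

    # flag preprocessed volumes that look degenerate, e.g. because the lung
    # segmentation in step1_python produced an empty or tiny mask
    for path in glob.glob('prep_result/*_clean.npy'):
        vol = np.load(path)
        if min(vol.shape[1:]) < 32:  # arbitrary threshold for an implausibly small crop
            print('suspicious volume %s with shape %s' % (path, str(vol.shape)))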