xamyzhao / brainstorm

Implementation of "Data augmentation using learned transforms for one-shot medical image segmentation"
MIT License
392 stars 91 forks

data storage #27

Open cue1997 opened 3 years ago

cue1997 commented 3 years ago

Thank you very much for your experiments and new ideas. So far, I have successfully run your code and trained it to get the output images, but I want to use the newly generated images. How do I save each output image separately, instead of just saving the composite montage in figures?

xamyzhao commented 3 years ago

Hi, thanks for your interest! You'll likely have to modify the _make_results_im function in either transform_models.py (https://github.com/xamyzhao/brainstorm/blob/master/src/transform_models.py#L620) or segmenter_model.py (https://github.com/xamyzhao/brainstorm/blob/master/src/segmenter_model.py#L777).
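As a concrete sketch of what such a modification could look like: the helper below saves each generated volume/slice to its own `.npy` file instead of compositing everything into one results montage. Note that `save_generated_images`, the `.npy` format, and the output layout are all hypothetical illustrations, not part of the repo's `_make_results_im`:

```python
import os
import numpy as np

def save_generated_images(X_generated, out_dir, prefix="gen"):
    """Save each generated image in the batch as its own .npy file,
    instead of compositing them into a single results montage."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, img in enumerate(X_generated):
        # Drop the trailing channel axis so 2D slices save as (H, W)
        path = os.path.join(out_dir, f"{prefix}_{i:04d}.npy")
        np.save(path, np.squeeze(img, axis=-1))
        paths.append(path)
    return paths
```

You could call this from inside `_make_results_im` (or right after the generator produces a batch), then reload the files later with `np.load` for downstream use.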

cue1997 commented 3 years ago

When I was training on my stroke dataset, the input size was (8, 256, 256), and I ran into the same problem another user reported: when training transform color with --aug_sas, the code points to your previously trained model and reports: ValueError: Error when checking input: expected input_1 to have shape (160, 192, 224, 1) but got array with shape (8, 256, 256, 1). The error occurs at line 541 of transform_model, and I checked that X_target and X_source both have shape (8, 256, 256, 1). How can I solve this problem?

xamyzhao commented 3 years ago

@cue1997 if you search for the term "160, 192, 224" in this repository, you can see the places where this input size is defined: https://github.com/xamyzhao/brainstorm/search?q=160%2C+192%2C+224. You'll likely want to update all of these to reflect the sizes in your dataset.
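One way to make that kind of update less error-prone is to define the shape in a single place and derive everything else from it. The module below is a hypothetical illustration (the names `VOL_SHAPE` and `pad_or_crop_to` are not part of brainstorm); the pad/crop helper also guards against scans that don't exactly match the network's expected size:

```python
import numpy as np

# Hypothetical config: define the volume shape once, instead of
# hard-coding (160, 192, 224) in several files.
VOL_SHAPE = (8, 256, 256)        # (depth, height, width) for the stroke data
VOL_SHAPE_CH = VOL_SHAPE + (1,)  # with the trailing channel axis

def pad_or_crop_to(vol, target=VOL_SHAPE):
    """Center-crop or zero-pad a volume to the target spatial shape,
    so every loaded scan matches what the network expects."""
    out = np.zeros(target, dtype=vol.dtype)
    src, dst = [], []
    for s, t in zip(vol.shape, target):
        if s >= t:  # input too large: center-crop the source
            start = (s - t) // 2
            src.append(slice(start, start + t))
            dst.append(slice(0, t))
        else:       # input too small: center the source in zero padding
            start = (t - s) // 2
            src.append(slice(0, s))
            dst.append(slice(start, start + s))
    out[tuple(dst)] = vol[tuple(src)]
    return out
```

With a single constant, every model-building and data-loading call site can import `VOL_SHAPE_CH` instead of repeating the tuple, so a dataset change touches one line.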

cue1997 commented 3 years ago

I'm sorry I didn't explain my situation clearly. I have modified all the (160, 192, 224) occurrences, and I can train flow-fwd, flow-bkd, and mri-100unlabeled --aug_rand. But when training color-unet with --aug_sas, the error above is still reported. In addition, at line 134 of mri_loader.py I had to add curr_contours = 256 and curr_segs = 256, otherwise the program reports: File "F:\brainstorm-master1\src\mri_loader.py", line 134, in load_dataset_files vols[i], curr_segs, curr_contours = data ValueError: could not broadcast input array from shape (160,192,224,1) into shape (8,256,256,1)

xamyzhao commented 3 years ago

Are you sure you are loading your own trained model? Your first error sounds like your model is still expecting the old input shape, which implies that you are loading the pretrained model.
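A quick way to verify this is to print which checkpoint file is actually being picked up before loading it. The sketch below is an assumption about the setup (the `.h5` naming and `latest_checkpoint` helper are hypothetical, not the repo's loading code):

```python
import glob
import os

def latest_checkpoint(models_dir):
    """Return the most recently modified .h5 checkpoint in models_dir,
    or None if the directory holds no checkpoints. Printing the result
    before loading makes it obvious whether a stale pretrained model
    is being picked up instead of your own."""
    ckpts = glob.glob(os.path.join(models_dir, "*.h5"))
    if not ckpts:
        return None
    return max(ckpts, key=os.path.getmtime)
```

If the printed path points at the downloaded pretrained weights rather than your own training run, that would explain the (160, 192, 224, 1) input shape in the error.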

cue1997 commented 3 years ago

Hello, thank you very much for your previous guidance, and I'm sorry for not responding sooner. My data is ready for training, but the results on the stroke dataset I'm using are very poor. Have you tried training on stroke data before? I put the ct/cbf/cbv/mtt images in vol and the masks in seg. Is there a problem with how I placed my files? In addition, at line 134 of mri_loader.py I still need to add curr_contours = 256 and curr_segs = 256 before I can train on my (8, 256, 256) data, even though I have deleted all the files in the models folder.

xamyzhao commented 3 years ago

Can you elaborate on what you mean by the "training effect is very poor"? Does this mean that you are able to train, but you observe that the registration (spatial transformation) performance is bad?

I would expect the registration performance to be worse on a 2D dataset than on a 3D dataset, since this method does not explicitly handle occlusions.

cue1997 commented 3 years ago

Yes, the training results are not good. Could you give me your email? I can package my code and data and send it to you.