Open lfxuan opened 4 years ago
Hi! Can you please provide more context on what you have tried? Did you train your own spatial and appearance transform models on your heart data, or are you using our pre-trained models?
On Mon, Jul 6, 2020 at 1:49 AM lfxuan notifications@github.com wrote:
Thank you very much for your work! I am working on the generation of 2D heart slices. I plan to use your code "--sas" to generate data, but when visualizing the data, I found that the shape of the data is the shape of the source data. Could you tell me which step is set wrong?
— View it on GitHub: https://github.com/xamyzhao/brainstorm/issues/25
Sorry for the late reply. (1) I trained the spatial and appearance transform models on my heart dataset in advance, and then called the flow-fwd model to generate new data. (2) To meet my input requirements, I modified the code to accept images of size (256, 256, 3). (3) To generate diverse data, I modified the sampling in `_create_augmented_examples` for the "--sas" type to use next() to randomly select different source examples. (4) After the above steps, the generated X_aug and Y_aug are saved inside `_create_augmented_examples`, but when I visualize them, the generated data all looks like the source data. This confuses me.
Thanks for the clarification. In (3), you used the generator to select random source examples? In L444-447, X_aug is created by warping the source image to randomly sampled unlabeled images. Can you post a snippet of the changes you made in `_create_augmented_examples`?
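For reference, the SAS labeling step described above amounts to predicting a flow from the labeled source toward a sampled unlabeled target and then moving the source segmentation along that same flow. A minimal sketch of that idea (the function name and the `(warped, flow)` output convention here are assumptions based on this discussion, not the repo's exact API):

```python
def sas_pseudo_label(flow_model, seg_warp_model, source_X, source_Y, unlabeled_X):
    # predict the source image warped toward the target, plus the flow field
    X_aug, flow = flow_model.predict([source_X, unlabeled_X])
    # move the source segmentation along the same flow
    Y_aug = seg_warp_model.predict([source_Y, flow])
    return X_aug, Y_aug
```

The pair (X_aug, Y_aug) is what gets appended to the training set; if X_aug always looks like the untouched source, the flow model is effectively predicting a zero flow.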
This is my changed code:

```python
def _create_augmented_examples(self, source_gen):
    if self.aug_sas:
        aug_name = 'SAS'
        # just label a bunch of examples using our SAS model, and then append them to the training set
        if self.X_labeled_train.shape[0] == 1:
            source_X = self.X_labeled_train
            source_Y = self.segs_labeled_train
        else:
            print('multiple atlas')

        unlabeled_labeler_gen = self.dataset.gen_vols_batch(
            dataset_splits=['unlabeled_train'],
            batch_size=1, randomize=False, return_ids=True)

        X_train_aug = np.zeros((self.n_aug,) + self.aug_img_shape)
        Y_train_aug = np.zeros((self.n_aug,) + self.aug_img_shape)
        ids_train_aug = []  # ['sas_aug_{}'.format(i) for i in range(self.n_aug)]
        for i in range(20):  # control the amount of data generated
            if not self.X_labeled_train.shape[0] == 1:
                source_X, source_Y, _, _ = next(source_gen)
            self.logger.debug('Pseudo-labeling UL example {} of {} using SAS!'.format(i, self.n_aug))
            unlabeled_X, _, _, ul_ids = next(unlabeled_labeler_gen)
            # warp labeled example to unlabeled example
            X_aug, flow = self.flow_aug_model.predict([source_X, unlabeled_X])
            # warp labeled segs similarly
            Y_aug = self.seg_warp_model.predict([source_Y, flow])
            # save the X_aug and Y_aug
            ...
            X_train_aug[i] = unlabeled_X  # why not X_aug?
            Y_train_aug[i] = Y_aug
            ids_train_aug += ['sas_{}'.format(ul_id) for ul_id in ul_ids]
```
Thanks for the code -- that seems okay to me. Have you checked that each unlabeled_X is a differently-shaped example? Also, does your flow_aug_model work if you give it two different examples as input?
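The first question above can be answered numerically rather than by eye: draw a few batches from the generator and check that they actually differ. This is a generic sketch (the generator is assumed to yield `(X, seg, extra, ids)` tuples as in the snippet above):

```python
import numpy as np

def targets_are_distinct(gen, n_draws=5):
    """Draw n_draws batches and report whether any two are (near-)identical."""
    seen = []
    for _ in range(n_draws):
        X = next(gen)[0]  # first element of each batch tuple is the image
        if any(np.allclose(X, prev) for prev in seen):
            return False
        seen.append(np.array(X, copy=True))
    return True
```

If this returns False for a `randomize=False` generator, the dataset split may contain only one example (or the generator is being re-created each call).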
I'm sure each unlabeled_X is a differently-shaped example, and my flow_aug_model works, but there is no effect like the one described in the paper; the result is the same as using the '--aug_rand' option in the code. I have sent you an email with my labeled/unlabeled/generated data as an attachment.
I haven't received your email yet, but I can try to repro the issue with my data. What version of tensorflow and keras are you using?
When I run the segmenter with --aug_rand, I am seeing randomly warped examples. What do you mean by "there is no effect mentioned in the paper"? Are your augmented examples identical to the source example?
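The identical-to-source question can be settled with a crude numerical check instead of visual inspection (a generic sketch, not code from the repo; the tolerance is arbitrary):

```python
import numpy as np

def looks_identical(aug, source, tol=1e-4):
    """Mean absolute difference between an augmented image and its source.

    A value near zero means the "augmented" example is essentially
    just the source image, i.e. the flow model predicted ~zero flow.
    """
    a = np.asarray(aug, dtype=np.float32)
    b = np.asarray(source, dtype=np.float32)
    return float(np.abs(a - b).mean()) < tol
```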
Sorry, I was not clear. What I meant is that I don't know which part of my reproduction introduced the error that makes the data generated by "sas" and "rand" look the same, so "sas" fails to achieve the deformation effect of the flow-fwd model described in the paper. My environment is tensorflow-gpu 1.9.0 and keras 2.1.6. I have re-sent an email with the data to the email address given in your paper.
Got your email. I agree that the results of --aug_sas look strange. For each transformed example, can you also visualize the source and target images? That will help us figure out if the network is attempting to warp the source to the target.
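One way to produce that visualization is to write each (source, target, warped) triplet to a single image file; a minimal matplotlib sketch (function and file names here are illustrative, not from the repo):

```python
import matplotlib
matplotlib.use('Agg')  # render to file, no display needed
import matplotlib.pyplot as plt
import numpy as np

def save_triplet(source_X, target_X, X_aug, out_path='triplet.png'):
    """Save source / target / warped images side by side for inspection."""
    fig, axes = plt.subplots(1, 3, figsize=(9, 3))
    panels = [(source_X, 'source'), (target_X, 'target'), (X_aug, 'warped (X_aug)')]
    for ax, (img, title) in zip(axes, panels):
        ax.imshow(np.squeeze(img), cmap='gray')
        ax.set_title(title)
        ax.axis('off')
    fig.savefig(out_path, bbox_inches='tight')
    plt.close(fig)
```

If the warped panel matches the source panel instead of drifting toward the target, the flow model is not doing its job.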
I made three sets of source data, target data and corresponding transformed data visualization, and sent the relevant data to your mailbox.
I got your email. I agree that the generated examples do not look like the target examples at all. It seems like you might be loading the wrong model -- perhaps check that `_create_augmentation_models()` is loading the right one? I've stepped through the code in the repo and it seems to be working as expected.
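A quick sanity check for the wrong-model hypothesis is to fingerprint the network's weights before and after loading a checkpoint: if the fingerprint does not change, the weights file was never applied. This relies only on the standard Keras `get_weights()`/`load_weights()` API; the helper name is made up:

```python
import numpy as np

def weight_fingerprint(model):
    """Sum of absolute weight values -- cheap way to tell two weight states apart."""
    return float(sum(np.abs(w).sum() for w in model.get_weights()))

# usage sketch (ckpt_path is whatever _create_augmentation_models resolves to):
# before = weight_fingerprint(flow_aug_model)
# flow_aug_model.load_weights(ckpt_path)
# assert weight_fingerprint(flow_aug_model) != before, 'checkpoint did not change the weights'
```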
Hi, I also encountered some problems when training with my own dataset. Could you please send your code to my email? Thanks!
Hello! My GPU does not have 12 GB of memory; it is a little small. I can run the "color-unet" model, but I cannot run the "flow-fwd" and "flow-bck" models. Which part of the code should I modify so that I can run the program?
@jelly571 you'll probably need to modify the input volume size (to the model), either by using a different dataset or by cropping/resizing each example.
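For the cropping option, a generic center-crop helper (not from the repo) that shrinks each example's spatial dimensions before it reaches the model -- smaller inputs mean smaller activation maps and less GPU memory:

```python
import numpy as np

def center_crop(vol, target_shape):
    """Center-crop an array (e.g. (H, W, C)) to target_shape.

    Sketch for shrinking inputs to fit GPU memory; if a dimension is
    already smaller than the target, it is left as-is (pad separately).
    """
    slices = []
    for dim, tgt in zip(vol.shape, target_shape):
        start = max((dim - tgt) // 2, 0)
        slices.append(slice(start, start + tgt))
    return vol[tuple(slices)]
```

Note that the model's input shape (and any `aug_img_shape`-style settings) must be changed to match the cropped size.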