gotjd709 / mrn


Different transformations for different mmp inputs? #1

Open zhujingsong opened 2 years ago

zhujingsong commented 2 years ago

I am wondering whether different transformations are applied to the different mmp inputs here:

sample1 = self.augmentation(image=batch_x[j][0], mask=batch_y[j])
sample2 = self.augmentation(image=batch_x[j][1])

Since each call of self.augmentation() can produce a totally different transform, I think the two inputs might end up transformed inconsistently.
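(For illustration, a minimal sketch assuming an albumentations-style Compose pipeline, which is not necessarily the exact one used in this repository: each call re-samples its random parameters, so two separate calls can flip or rotate the patches differently.)

```python
import numpy as np
import albumentations as A  # assumption: an albumentations-style pipeline

# Each call to the Compose object draws its random parameters independently,
# so img_lo and img_hi may receive different flips/rotations.
aug = A.Compose([A.HorizontalFlip(p=0.5), A.RandomRotate90(p=1.0)])

img_lo = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for batch_x[j][0]
img_hi = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for batch_x[j][1]
mask = np.zeros((256, 256), dtype=np.uint8)       # stand-in for batch_y[j]

sample1 = aug(image=img_lo, mask=mask)  # one independent random draw
sample2 = aug(image=img_hi)             # another, unrelated random draw
```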

PLUS: Could you please share the metrics you obtained when running the MRN?

gotjd709 commented 2 years ago

I had overlooked this point. After recognizing my mistake, I updated this part as follows:

datagen.py:
sample = self.augmentation(image=batch_x[j][0], image1=batch_x[j][1], mask=batch_y[j])

In the HookNet code, I applied it the correct way. Thanks for pointing this out.
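For reference, here is a minimal sketch of how such a single-call setup can work with albumentations' additional_targets (assuming an albumentations pipeline; the transform list and names below are illustrative, not the repository's exact code):

```python
import numpy as np
import albumentations as A

# Registering `image1` as an extra image target makes the *same* sampled
# transform apply to both resolution inputs and the mask in one call.
augmentation = A.Compose(
    [A.HorizontalFlip(p=0.5), A.RandomRotate90(p=0.5)],
    additional_targets={"image1": "image"},
)

x0 = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for batch_x[j][0]
x1 = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for batch_x[j][1]
y = np.zeros((256, 256), dtype=np.uint8)      # stand-in for batch_y[j]

sample = augmentation(image=x0, image1=x1, mask=y)
x0_aug, x1_aug, y_aug = sample["image"], sample["image1"], sample["mask"]
```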

And I only considered a single loss, computed between the prediction for batch_x[j][0] and the ground truth. So I use the metrics from segmentation_models.pytorch.
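As an example, a sketch of computing an IoU score with the metrics module of segmentation_models.pytorch (the functional API available in recent smp versions; the specific metric, mode, and threshold used for MRN are not specified here, so treat these as placeholders):

```python
import torch
import segmentation_models_pytorch as smp

# Dummy tensors standing in for the model output on batch_x[j][0]
# and the corresponding ground-truth mask.
pred = torch.rand(4, 1, 256, 256)                # predicted probabilities
target = torch.randint(0, 2, (4, 1, 256, 256))   # binary ground truth

# Confusion-matrix statistics, then a micro-averaged IoU over the batch.
tp, fp, fn, tn = smp.metrics.get_stats(pred, target, mode="binary", threshold=0.5)
iou = smp.metrics.iou_score(tp, fp, fn, tn, reduction="micro")
print(f"IoU: {iou:.4f}")
```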

My implementation is customized for my own task, so you will have to modify it if you want to use it. I hope the code helps you.