Closed STORMTROOPERRR closed 10 months ago
Hi, you can resize both the images and the masks before the training, or before the other augmentations. This could fix both of the errors. Thanks for asking!
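For reference, with albumentations this usually means putting an `A.Resize` step at the front of the `A.Compose` pipeline so the image and mask reach a common size before any other augmentation runs. Here is a minimal pure-NumPy sketch of the same idea (nearest-neighbor resize applied to both image and mask); the function name and the 400/256 sizes are illustrative, not taken from the repository:

```python
import numpy as np

def resize_nearest(arr: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbor resize of the first two (spatial) dimensions."""
    h, w = arr.shape[:2]
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    return arr[rows[:, None], cols]

# Illustrative shapes: a 400x400 RGB image and its 400x400 mask.
image = np.zeros((400, 400, 3), dtype=np.uint8)
mask = np.zeros((400, 400), dtype=np.uint8)

# Resize BOTH to the training resolution before any other augmentation,
# so downstream transforms see matching spatial sizes.
image_r = resize_nearest(image, 256)
mask_r = resize_nearest(mask, 256)
print(image_r.shape, mask_r.shape)  # (256, 256, 3) (256, 256)
```

Nearest-neighbor interpolation is the safe default for masks, since it never introduces new label values the way bilinear interpolation can.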
Thanks for your kind reply; I figured it out in exactly this way. However, when I tried to reproduce your MTBIT performance, I found that training was quite unstable, especially the RMSE and cRMSE metrics on the validation subset, with the test RMSE above 2.0, which is far worse than it should be. Could you kindly give me some insight into this? Many thanks.
No. 1: when the data is loaded and augmentation is applied, the script triggers an error in albumentations indicating that the input images and masks don't share the same size. No. 2: as described in the article, the 400x400 images should be resized to 256x256 for training, but in your code the data fed to the network is still 400x400; this also affects the computation of the 3D loss, where the label size is 200 and doesn't match the output of the network. I wonder if you could help me out? Many thanks.