Thank you for your contribution. Could you tell me how much time and memory this model requires during training? I trained with a batch size of 1 on a single 1080 Ti GPU, but one epoch takes four hours, and during the second epoch I got an out-of-memory error. Also, the link you provided for the BRATS T1-to-T2 pre-trained model seems to point to the IXI T1-to-PD pre-trained model. Are these two pre-trained models the same? I would be very grateful for your reply.