Open a15082328042 opened 3 years ago
Thank you for your work. When I use the code for training, I can't reach the results reported in the paper. Can you give more detailed parameters? Also, I feel there is a big difference between the paper and the code you submitted.
Hi, thank you for your attention.
For reproduction: if you do not use the additional slice-position information, please use the code in ~/code/general, which is re-edited, or the code in ~/code/original, which is the original code I used. If you do have the slice-position information, you can use the code in ~/code/add_position_info/.
The only requirement for reproduction is generating the input images: crop all images to 192×192 around the center of the heart (I assume you used the MS-CMR dataset). Please re-download the code for testing.
'I feel that there is a big difference between the paper and the code you submitted' — the code in ~/code/general and ~/code/add_position_info/ is consistent with the paper. If you have any questions, any discussion is welcome.
Best,
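The repository does not ship preprocessing code, so as an illustrative sketch only: a 192×192 crop around a given heart center plus a simple z-score intensity normalization might look like the following. The `center` coordinate is an assumption here; you would need to supply it from your own heart localization.

```python
import numpy as np

def crop_and_normalize(image, center, size=192):
    """Crop a 2D slice to size x size around `center` (row, col) and
    z-score normalize intensities. `center` is assumed to come from your
    own heart localization; it is not provided by this repository."""
    half = size // 2
    r, c = center
    # Pad so the crop window never leaves the image bounds.
    padded = np.pad(image, half, mode="constant")
    r, c = r + half, c + half
    crop = padded[r - half:r + half, c - half:c + half]
    # Zero-mean, unit-variance intensity normalization (one common choice;
    # the paper's exact normalization is not specified in this thread).
    return (crop - crop.mean()) / (crop.std() + 1e-8)

# Example: a dummy 256x256 slice cropped around its center.
slice_ = np.random.rand(256, 256).astype(np.float32)
out = crop_and_normalize(slice_, center=(128, 128))
print(out.shape)  # (192, 192)
```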
Thank you for your reply. Following your README, I use the first five cases of the LGE dataset as validation and the rest as target data. I have also experimented with both parts of the code, but the results always differ from the paper. The paper reports MYO: 73.03±8.316, LV: 88.06±4.832, RV: 78.47±14.86; except for RV, I cannot reach close values for the other two.
If it's because of a problem with my data, please point it out. I would be very grateful if you could share your training dataset format.
Do you use the dataset in the ~/Dataset/ folder I uploaded to this GitHub repository? (The 45 unlabeled LGE cases are used for training, and you have to further split them into 5 and 40 images for validation and test, respectively, corresponding to the dataset_dir+'/LGE_Vali/' and dataset_dir+'/LGE_Test/' folders.)
I have just uploaded the original code I used for training in the './code/original' and './code/original_pos_info' folders. If you cannot obtain the results, you can try these codes; I just reran them on my server, and the results were close to those in the paper.
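For reference, the 5/40 split described above could be scripted roughly as follows. This is a sketch, not the author's code; the exact filenames and the assumption that `case_files` is a sorted list of the 45 LGE images are mine, while the LGE_Vali / LGE_Test folder names come from the comment above.

```python
import os
import shutil

def split_lge_cases(dataset_dir, case_files, n_vali=5):
    """Split the 45 unlabeled LGE cases into validation and test sets,
    matching the LGE_Vali / LGE_Test folder names used by the training code.
    `case_files` is a sorted list of the 45 LGE image file paths (assumed)."""
    vali_dir = os.path.join(dataset_dir, "LGE_Vali")
    test_dir = os.path.join(dataset_dir, "LGE_Test")
    os.makedirs(vali_dir, exist_ok=True)
    os.makedirs(test_dir, exist_ok=True)
    for i, name in enumerate(case_files):
        # First n_vali cases go to validation, the remaining 40 to test.
        dst = vali_dir if i < n_vali else test_dir
        shutil.copy(name, dst)
    return vali_dir, test_dir
```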
Thank you very much, I think I have found my problem: I only used 40 unlabeled LGE cases for training. I'll try again.
Can you get the same results as the paper now?
I can't quite reach that result; mine is only a little lower.
Now I also have a small doubt: when I change the image size to 224, the results become very poor. This seems unreasonable to me, and I hope to get your reply. Thank you very much!
Thank you for your reply, which gives me a deeper understanding of the paper. I have tried increasing the batch size, but the effect is not very noticeable. Thank you again; your work has helped me a lot.
- The most challenging problem for explicit domain-discrepancy metrics (not only the regularization term proposed in this project) is, I think, their optimization. We have to estimate the metric with an estimator over mini-batch samples, and for segmentation the latent features are high-dimensional, so the estimator has high variance. The general ways to reduce this variance, using more samples per mini-batch or reducing the feature dimensionality, are both bad choices for training a segmentation model. In your case, a larger input size leads to higher-dimensional latent features and a higher-variance regularization term, which harms optimization of the loss and the model. Moreover, as discussed in the paper, I computed this metric on the marginal distributions instead of the joint distribution, which further aggravates the issue. So enlarging the input size is not a good choice unless you can also train with larger mini-batches and more diverse data to cover the distribution. From my experience, this is a general weakness of this kind of method; if you are interested in this direction, I think solving this problem would be important for improving performance.
- When changing the input size, please also fine-tune the weighting parameter of the regularization loss in the code, because I did not normalize it.
Hope these explanations are helpful for your further research.
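The variance argument above can be illustrated with a toy experiment (not from this repository): estimate a simple mean-feature discrepancy between two synthetic "domains" from mini-batches of different sizes, and compare the spread of the estimates. The distributions and the plain L2 discrepancy are my stand-ins for the paper's regularization term.

```python
import numpy as np

rng = np.random.default_rng(0)

def discrepancy_std(batch_size, dim, n_trials=2000):
    """Spread (std) of a mini-batch discrepancy estimator.

    The true gap between the two domain means is 0.5 per dimension; each
    trial estimates the gap from one mini-batch per domain, mimicking how
    a domain-discrepancy regularizer is evaluated during training.
    """
    estimates = []
    for _ in range(n_trials):
        src = rng.normal(0.0, 1.0, size=(batch_size, dim))
        tgt = rng.normal(0.5, 1.0, size=(batch_size, dim))
        # L2 distance between mini-batch mean features.
        estimates.append(np.linalg.norm(src.mean(axis=0) - tgt.mean(axis=0)))
    return float(np.std(estimates))

# Small batches give a much noisier estimate of the same discrepancy,
# which is the optimization difficulty described above.
print(discrepancy_std(batch_size=4, dim=32))
print(discrepancy_std(batch_size=64, dim=32))
```

The same effect appears when `dim` grows with a larger input size: higher-dimensional features make the estimator noisier at a fixed batch size.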
Hello, can you provide the data-preprocessing code (ROI cropping and intensity normalization)?
I would also like the data-processing code. Can you share it?
I want to cite and compare with your paper, and I also need the data-processing code, because the performance on MM-WHS is poor using both the data released by SIFA and the data processed by our own method.
Have you solved it?
No, I tried a lot of normalization methods, but I couldn't reach those results.
No, I tried many preprocessing methods and parameters. Finally, I gave up.
Can you share the MM-WHS dataset?