wangchuan199803 opened 11 months ago
Sorry, I defaulted to "context=True" in this project, so I removed the "if context" condition check in the network part. The code has been modified. Thanks for the correction.
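For readers hitting the same issue, here is a minimal sketch of what such a context branch could look like. It assumes, purely for illustration, that `context=True` means conditioning on two neighboring slices concatenated along the channel dimension; the class name and architecture below are illustrative placeholders, not this repo's actual network code.

```python
# Illustrative sketch only -- not CoreDiff's actual network code.
# Assumes "context=True" means conditioning on the two neighboring
# slices, concatenated with the current slice along the channel axis.
import torch
import torch.nn as nn

class ContextCondNet(nn.Module):
    def __init__(self, base_channels=32, context=True):
        super().__init__()
        self.context = context
        in_ch = 3 if context else 1  # current slice (+ 2 neighbors if context)
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, 1, 3, padding=1),
        )

    def forward(self, x, neighbors=None):
        # Without this "if self.context" guard, context=False breaks,
        # because the first conv would still expect 3 input channels.
        if self.context:
            x = torch.cat([x, neighbors], dim=1)  # (B, 3, H, W)
        return self.net(x)
```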
Hello, I see that this model obtains the denoising result from the noisy image in one step during training, but during testing the denoising result is obtained over t-1 steps. Is my understanding correct? Also, which part of the paper does the `sample` function in diffusion_modules.py correspond to? Looking forward to your reply, thank you.
Your understanding is correct. In the training phase, we train a conditional network for one-step prediction. Because of the potential oversmoothing of one-step prediction, we use the multi-step sampling algorithm proposed by Cold Diffusion during the inference phase to retain more details.
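For reference, here is a minimal sketch of Cold Diffusion's improved multi-step sampling loop (Algorithm 2 in the Cold Diffusion paper), which is the procedure referred to above. The `restore` and `degrade` callables are illustrative placeholders for the one-step prediction network and the forward degradation operator; they are not this repo's actual function names.

```python
# Minimal sketch of Cold Diffusion's improved sampling (Algorithm 2).
# restore(x_t, t): one-step prediction network, returns a clean estimate.
# degrade(x0, t):  forward degradation operator at step t.
import torch

@torch.no_grad()
def cold_diffusion_sample(x_T, T, restore, degrade):
    x_t = x_T  # start from the fully degraded image (the LDCT input)
    for t in range(T, 0, -1):
        x0_hat = restore(x_t, t)  # one-step clean estimate
        # Re-degrade the estimate to steps t and t-1; their difference
        # removes exactly one degradation step while preserving details.
        x_t = x_t - degrade(x0_hat, t) + degrade(x0_hat, t - 1)
    return x_t
```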
Well, that's great, thank you for your reply. Another problem: I only use the Mayo dataset, and I multiply the loss by 100 because it is so small. I then found that the average PSNR was only 19 at 19,000 iterations. Did I not train for long enough, or is the amount of data insufficient? By the way, judging from your name, I'd guess you are Chinese. Hahaha
Yes, I am a Chinese PhD student at Fudan University. There is no need to scale up the training loss. Once the training process stabilizes, the loss hovers around 1e-5. For training, I used over 4,000 1mm images from nine patients, ranging from L067 to L333.
Haha, I thought you were from abroad, which is why I wrote in English. I'm at UESTC. I see my loss reaches about 1e-5 quite early, but it varies a lot between batches. My dataset is about the same size, so I'll try removing that 100x factor. Thanks!
You're welcome. After the loss stabilizes around 1e-5, you still need to train for quite a while longer. Feel free to reach out with any questions.
Got it, hehe.
By the way, how come the metrics you computed are so high?
If you want to reproduce results similar to those in the paper, you should use the metric calculation code provided by this project; different implementations can produce slightly different numbers. The CT window used to compute the metrics is [-1000, 1000] HU, and the choice of window has a large impact on the results. I have updated the code and training demo for preprocessing the original Mayo 2016 DICOM data, and I have also updated my training loss curve and the metric curves on the test set, so you can check your setup against them.
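To make the windowing point concrete, here is a minimal sketch of computing PSNR over the [-1000, 1000] HU window. The function names are illustrative; to reproduce the paper's numbers you should still use this project's own metric code.

```python
# Sketch of PSNR over the [-1000, 1000] HU display window.
# A different window changes the normalization, and therefore the score.
import numpy as np

def window_and_normalize(img_hu, lo=-1000.0, hi=1000.0):
    img = np.clip(img_hu, lo, hi)
    return (img - lo) / (hi - lo)  # scale the window to [0, 1]

def psnr(pred_hu, target_hu):
    pred = window_and_normalize(pred_hu)
    target = window_and_normalize(target_hu)
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(1.0 / mse)  # data_range = 1 after normalization
```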
Sorry to bother you again, haha. After the CoreDiff model is trained, the sampling at test time follows the Cold Diffusion procedure. But I found that, compared with multi-step sampling, feeding the image through the model just once gives better results; in other words, the model effectively becomes a plain UNet denoising network...
There are two possible reasons for this phenomenon (see the sketch after this list):
1. Insufficient training. The more sampling steps you use, the more values of t must be covered during training, so more training iterations are needed.
2. The one-step prediction is very smooth, so metrics such as PSNR and RMSE look better, but fine image detail is lost. The strength of diffusion models lies in recovering fine structures, so multi-step prediction helps generate texture details closer to the normal-dose image. You can evaluate with metrics such as SSIM, FSIM, and perceptual loss; qualitative assessment is also important when evaluating CT images. The ablation on T and the choice of metrics are discussed in detail in the paper.
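As a concrete example for point 2, here is a minimal sketch of evaluating SSIM with scikit-image over the same [-1000, 1000] HU window as above; FSIM and perceptual loss would need additional packages. Purely illustrative, not this repo's evaluation code.

```python
# Sketch of SSIM evaluation over the [-1000, 1000] HU window.
import numpy as np
from skimage.metrics import structural_similarity

def ssim_hu(pred_hu, target_hu, lo=-1000.0, hi=1000.0):
    # Same windowing as the PSNR sketch above.
    pred = (np.clip(pred_hu, lo, hi) - lo) / (hi - lo)
    target = (np.clip(target_hu, lo, hi) - lo) / (hi - lo)
    return structural_similarity(pred, target, data_range=1.0)
```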
Okay, thanks. I'll go back and check the image quality.
You're welcome.
You guys are amazing TwT. I'm working on my undergraduate thesis and have just started on low-dose CT reconstruction; I don't know anything yet and have no idea where to begin. My head hurts TwT
Take it step by step. Feel free to reach out with any questions.
Okay, thanks TwT