Closed GreameLee closed 3 days ago
Hi @JiamingLiu-Jeremy and @GreameLee,
Is the training code under `_fp16util.MixedPrecisionTrainer`, or do we need to write new code following this repository: https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/train_util.py ?
Thanks!
For the sampling process, the input is a conditional input derived from a back-projection method. For the training process, most models such as DPS train an unconditional diffusion model on clean CT images. But the paper says the training is also conditional, which is confusing. I tried this data-consistency scheme in other diffusion models, but the performance was not as good as this model's.
DOLCE benefits from using pre-trained DPMs that take FBP or RLS as extra inputs during the reverse sampling process.
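To make "extra inputs" concrete: the usual mechanism for this kind of conditioning (a sketch of the common scheme, not DOLCE's actual code) is to concatenate the condition with the noisy image along the channel axis:

```python
def conditional_model_input(x_t, cond):
    """Common conditioning scheme for image-to-image DPMs (a sketch, not
    DOLCE's actual code): the FBP/RLS reconstruction `cond` is stacked
    with the noisy image `x_t` along the channel axis, so a 1-channel
    denoiser becomes a 2-channel network.  Images are represented as
    channel-first nested lists [C][H][W] purely for illustration."""
    return x_t + cond  # list concatenation == channel concatenation here
```

At every reverse-sampling step the same `cond` is typically re-attached to the current `x_t`, which would match how the sampling script feeds the reconstruction to the model.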
As for the input
According to the paper and some clues scattered in the current version of the code (listed below), the "conditional input" should simply be the images reconstructed by filtered backprojection or regularized least squares.
Clues for the data format (`gt`, `img_reconstructed_by_fbp`, `img_reconstructed_by_rls`) and for data processing:
- https://github.com/wustl-cig/DOLCE/blob/8420348409073bf2db4be235a14929478f97859c/limited_ct_sample.py#L68-L72
- https://github.com/wustl-cig/DOLCE/blob/8420348409073bf2db4be235a14929478f97859c/guided_diffusion/gaussian_diffusion.py#L343
- https://github.com/wustl-cig/DOLCE/blob/8420348409073bf2db4be235a14929478f97859c/dataFidelities/CTClass.py#L141
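Reading those lines, each sample seems to bundle the clean image with both classical reconstructions. A minimal sketch of what a loader might return (the three field names follow the clues above; the dict container itself is an assumption):

```python
def unpack_training_sample(sample):
    """Hypothetical unpacking of one DOLCE training sample.  The three
    field names follow the clues in limited_ct_sample.py; storing them
    in a dict is an assumption made for this sketch."""
    gt = sample["gt"]                         # clean CT image (target)
    fbp = sample["img_reconstructed_by_fbp"]  # filtered backprojection
    rls = sample["img_reconstructed_by_rls"]  # regularized least squares
    # gt is the denoising target; fbp/rls serve as the conditional input.
    return gt, (fbp, rls)
```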
As for the training code
The author stated that the training can be run using OpenAI's script.
However, this cannot currently be done smoothly, since the script requires a
training_losses
method defined within the diffusion model, which is missing in DOLCE/blob/main/guided_diffusion/gaussian_diffusion.py
(I guess it is similar to standard DDPM training, but I am not sure whether there are custom q-sampling strategies). It could be reimplemented, but reproducing it would take great pains if one does not fully follow the authors' reasoning or have as much experience with the LEAP package as the authors, and may therefore fail to reproduce the results reported in the paper.
This is an interesting work. For both the benefit of the community and the reproducibility of the paper, we hope the authors can kindly release the training details soon.
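For reference, the missing `training_losses` most likely reduces to the standard DDPM "simple" loss: sample a timestep, inject noise via q-sampling, and take the MSE between the injected noise and the model's prediction. A scalar-valued sketch of that loss, under the assumption that DOLCE uses no custom q-sampling (scalars stand in for images):

```python
import math
import random

def make_alphas_cumprod(T=1000, beta_start=1e-4, beta_end=0.02):
    # Cumulative products of (1 - beta_t) under a linear beta schedule,
    # as in the original DDPM setup.
    abar, out = 1.0, []
    for i in range(T):
        beta = beta_start + (beta_end - beta_start) * i / (T - 1)
        abar *= 1.0 - beta
        out.append(abar)
    return out

def q_sample(x_start, t, alphas_cumprod, noise):
    # Forward diffusion: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
    abar = alphas_cumprod[t]
    return math.sqrt(abar) * x_start + math.sqrt(1.0 - abar) * noise

def training_losses(model, x_start, cond, t, alphas_cumprod, noise=None):
    # Standard DDPM "simple" loss: MSE between the injected noise and
    # the model's prediction.  `cond` stands for the FBP/RLS
    # reconstruction that a conditional model would receive as an extra
    # input; here it is simply forwarded to `model`.
    if noise is None:
        noise = random.gauss(0.0, 1.0)
    x_t = q_sample(x_start, t, alphas_cumprod, noise)
    eps_pred = model(x_t, t, cond)
    return (eps_pred - noise) ** 2
```

An oracle model that returns the true noise drives this loss to zero, which is a quick sanity check when wiring a reimplementation into OpenAI's training loop.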
Hi @Masaaki-75,
We are working on the LEAP toolbox to make it easier to run. We have tested different training settings based on OpenAI's improved-diffusion and guided-diffusion (training losses, mini-batch size, learning rate, etc.), and performance across settings is similar. For general conditional training of DPMs, we recommend starting with the original OpenAI settings and fine-tuning the model empirically for your specific application.
Could you share more details about how you trained your model?
I would appreciate it if you could get in touch.