Ryhhhh opened 1 year ago
Hi there! Take guided-diffusion for example:

- First, modify `guided_diffusion/image_datasets.py` for your dataset.
- Second, run the command below in your terminal:

```
python scripts/image_train.py --data_dir your_dataset_dir --image_size 256 --num_res_blocks 3 --diffusion_steps 1000 --noise_schedule linear --lr 1e-4 --batch_size 1
```

You might change parameters like image size or batch size if needed. You can also change the logging directory in `scripts/image_train.py` by adding `os.environ['OPENAI_LOGDIR'] = your_logging_dir`.

- Third, load your pretrained model in `guided_diffusion/diffusion.py` in DDNM's code repo. Notice that `self.config.model.type == 'openai'`. You also need to create a new config file for your dataset in `configs/`, create a new dataset loader in `datasets/`, and add your dataset's loader in `configs/__init__.py`. Remember that the model's config should be the same as the one you trained with in guided_diffusion before.
- Having finished the preparation above, you can run

```
python main.py --ni --config imagenet.yml --path_y your_data_dir --eta 0.85 --deg "denoising" --sigma_y 0 -i celeba_deblur_g --exp your_work_dir
```

in your terminal to test DDNM. You can try different degradation modes via the parameter `--deg`, and add noise to y via `--sigma_y` with a value greater than 0. Results will be saved in the path given by `--exp`.
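The `OPENAI_LOGDIR` change in step 2 can be sketched as below. The directory name is a placeholder; the assignment has to run before guided_diffusion configures its logger, which reads this variable from the environment:

```python
import os

# Must run before guided_diffusion sets up its logger,
# which reads OPENAI_LOGDIR from the environment at configure time.
# './my_training_logs' is a placeholder path.
os.environ['OPENAI_LOGDIR'] = './my_training_logs'

# ... the rest of scripts/image_train.py stays unchanged ...
```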
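For step 3, the new loader in `datasets/` just has to follow the map-style dataset protocol that the data pipeline iterates over. A minimal sketch, assuming a flat folder of images — the class name, extension set, and the path-only `__getitem__` are illustrative; a real loader would open each image and return a tensor:

```python
import os

class MyImageFolder:
    """Minimal map-style dataset sketch (the __len__/__getitem__ protocol
    that torch.utils.data.Dataset expects). All names here are hypothetical."""

    EXTS = {'.png', '.jpg', '.jpeg'}

    def __init__(self, root, transform=None):
        self.root = root
        self.transform = transform
        # Sort so indexing is deterministic across runs.
        self.paths = sorted(
            os.path.join(root, f)
            for f in os.listdir(root)
            if os.path.splitext(f)[1].lower() in self.EXTS
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        path = self.paths[idx]
        # A real loader would open the image here (e.g. with PIL),
        # convert it to a tensor, and apply self.transform.
        return path
```

Registering it is then a matter of importing the class in the package and returning it from the dataset factory for your config name.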
@LiRunyi2001 Thank you for your response! I really appreciate you taking the time to offer your advice. I will definitely give the method you suggested a try right away.
Hi, you can try https://github.com/openai/improved-diffusion or https://github.com/openai/guided-diffusion
@wyhuai Thank you for your response!
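As a rough illustration of what `--sigma_y` controls in the recipe above: with `--deg "denoising"` the degradation operator is the identity, and `sigma_y` is the standard deviation of additive Gaussian noise on the measurement y. This is the generic noisy-degradation setup, not DDNM's exact preprocessing:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(3, 64, 64))  # stand-in for a clean image in [0, 1]

# --deg "denoising" means the degradation operator A is the identity;
# --sigma_y sets the standard deviation of the additive Gaussian noise.
sigma_y = 0.1
y = x + sigma_y * rng.standard_normal(x.shape)

# With --sigma_y 0 the measurement y would equal x exactly.
noise_std = float((y - x).std())
```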
You are welcome. And there is a small typo in step 3: it should be `datasets/__init__.py`.
Thanks a lot for your nice work!
How do I train celeba_hq.ckpt with my own dataset? I would appreciate it if you could provide more information on how to train the model, or point me toward any resources that could be helpful.