-
I am trying to use DDIB for MR-to-CT image translation. I have trained separate diffusion models for unconditional generation of MR and CT images. Given an MR image, I use the MR diffusion model an…
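For reference, the DDIB recipe is usually: invert the source image to the shared latent with the source-domain model's deterministic DDIM (eta = 0) ODE, then decode that latent with the target-domain model. A minimal numpy sketch of that cycle, assuming two pretrained noise-prediction models (the names `mr_model` and `ct_model` and the helper `ddib_translate` are hypothetical, not from the DDIB codebase):

```python
import numpy as np

def ddim_step(eps_model, x, t, t_next, alphas_cumprod):
    """One deterministic DDIM (eta=0) step from timestep t to t_next.
    Works in both directions: t_next > t inverts (encodes along the ODE),
    t_next < t denoises (decodes)."""
    a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
    eps = eps_model(x, t)
    x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_next) * x0_pred + np.sqrt(1.0 - a_next) * eps

def ddib_translate(x_mr, mr_model, ct_model, alphas_cumprod, timesteps):
    """DDIB cycle: encode with the MR model's probability-flow ODE,
    then decode the shared latent with the CT model."""
    x = x_mr
    # encoding pass with the source-domain model: t ascending
    for t, t_next in zip(timesteps[:-1], timesteps[1:]):
        x = ddim_step(mr_model, x, t, t_next, alphas_cumprod)
    # decoding pass with the target-domain model: t descending
    for t, t_next in zip(reversed(timesteps[1:]), reversed(timesteps[:-1])):
        x = ddim_step(ct_model, x, t, t_next, alphas_cumprod)
    return x
```

Because both passes follow deterministic ODEs, encoding and decoding with the *same* model is (up to discretization error) a round trip; translation comes from swapping the decoder's model.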
-
From left to right: the garment image, followed by generations with cloth guidance scale 1.0, 1.5, 2.0, and 2.5.
![ddpm_result](https://github.com/user-attachments/assets/15ae10a7-f068-41a8-ba65-3f65582dca9f)
![ddim_result_c1](https://github.com/user-atta…
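For context, a "cloth guidance scale" sweep like this is typically classifier-free guidance: the guided noise prediction extrapolates from the unconditional prediction toward the cloth-conditional one. A minimal sketch of the standard CFG combination (the function name `cfg_noise` is mine, not from this repo):

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, scale):
    """Classifier-free guidance. scale=1.0 reduces to the plain conditional
    prediction; larger scales push the sample harder toward the condition
    (here, the garment), often at the cost of diversity/naturalness."""
    return eps_uncond + scale * (eps_cond - eps_uncond)
```

This is why the generations change visibly between 1.0 and 2.5: the conditional signal is being linearly amplified at every denoising step.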
-
When performing LRA on the testing datasets, why does the function "ddim_sample_step" in moel_diffusion.py return y_t_m_1 instead of y_0_reparam? In CARD, the output prediction is y_0_repar…
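In a standard DDIM step the two quantities play different roles: y_0_reparam is the model's *current* estimate of the clean target, while y_t_m_1 is the partially denoised sample that must be fed into the next step of the chain. A generic sketch (this is the textbook DDIM update, not CARD's actual `ddim_sample_step`; the `return_y0` flag is my addition for illustration):

```python
import numpy as np

def ddim_sample_step(eps_model, y_t, t, t_prev, alphas_cumprod, return_y0=False):
    """One deterministic DDIM step. y_0_reparam is the reparameterized clean
    estimate; y_t_m_1 is what the sampling loop needs for the next iteration.
    At the final step (alphas_cumprod[t_prev] = 1) the two coincide."""
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
    eps = eps_model(y_t, t)
    y_0_reparam = (y_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    y_t_m_1 = np.sqrt(a_prev) * y_0_reparam + np.sqrt(1.0 - a_prev) * eps
    return y_0_reparam if return_y0 else y_t_m_1
```

So returning y_t_m_1 mid-chain is expected; the y_0_reparam estimate is only the final prediction once the loop reaches t = 0.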
-
In DDIM and DDPM, there are losses (KL divergences) during training that constrain the diffused outputs to be Gaussian distributions. I thought this was the basis for DDIM sampling (the reverse process)…
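One way to see why DDIM sampling works without retraining: both DDPM and DDIM share the same closed-form forward marginal q(x_t | x_0) = N(sqrt(a_t) x_0, (1 - a_t) I), and the training loss only ever touches that marginal. DDIM then defines a *different* (non-Markovian, optionally deterministic) reverse process that is consistent with the same marginals. A sketch of that shared marginal (generic notation, not tied to any particular repo):

```python
import numpy as np

def diffuse(x0, t, alphas_cumprod, rng):
    """Sample from the closed-form forward marginal
    q(x_t | x_0) = N(sqrt(a_t) * x_0, (1 - a_t) * I).
    DDPM and DDIM both train the noise predictor against this distribution,
    which is why a DDPM-trained model can be sampled with the DDIM reverse
    process unchanged."""
    a_t = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(a_t) * x0 + np.sqrt(1.0 - a_t) * noise
```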
-
In the Evaluation/sampling section, img_dir and seg_dir are needed to generate images, but where do I find them? Also, why do we need image inputs for sampling? I thought DDPM/DDIM starts from a noise…
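The questioner's intuition is right for the *unconditional* case: sampling starts from pure Gaussian noise with no image inputs. Directories like img_dir/seg_dir typically indicate a *conditional* model, where an image or segmentation map is fed to the network at every denoising step. A sketch of the unconditional loop, assuming a standard DDIM formulation (function name and signature are mine):

```python
import numpy as np

def sample_unconditional(eps_model, shape, alphas_cumprod, timesteps, rng):
    """Unconditional DDIM sampling: start from x_T ~ N(0, I) and denoise.
    A conditional model would additionally pass a conditioning image or
    segmentation map to eps_model at every step."""
    x = rng.standard_normal(shape)  # pure noise, no image input needed
    for t, t_prev in zip(reversed(timesteps[1:]), reversed(timesteps[:-1])):
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
        eps = eps_model(x, t)
        x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps
    return x
```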
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
The image output looks super dark / wrong
### Steps to r…
-
```
Traceback (most recent call last):
  File "D:\anytext\main.py", line 30, in <module>
    results, rtn_code, rtn_warning = pipe(
  File "D:\anytext\anytext_pipeline.py", line 306, in __call__
    samples, …
```
-
Below is the log I encountered when running "python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt --config configs/stable-diffusion/v2-inference-v.y…
-
@RetroCirce @haoheliu
Hello guys!!! :)
Thank you for publishing this work. It looks very promising, and the samples are very good too.
I need your audiosr for my music WAVs, but it does not …
-
Hi @karrykkk,
Thank you for the elaborate code. I was wondering whether you could share the pre-trained model checkpoints that were used on CELEBA as I couldn't find them anywhere. Thank you in ad…