CompVis / latent-diffusion

High-Resolution Image Synthesis with Latent Diffusion Models
MIT License
11.85k stars · 1.53k forks

Conditional model code in utils/sample_diffusion.py #295

Open junch9634 opened 1 year ago

junch9634 commented 1 year ago

Are there any plans to update the conditional model in utils/sample_diffusion.py?

```python
# excerpt from scripts/sample_diffusion.py
import glob
import os
import time

import numpy as np
from tqdm import trange


def run(model, logdir, batch_size=50, vanilla=False, custom_steps=None, eta=None,
        n_samples=50000, nplog=None):
    tstart = time.time()
    if vanilla:
        print(f'Using Vanilla DDPM sampling with {model.num_timesteps} sampling steps.')
    else:
        print(f'Using DDIM sampling with {custom_steps} sampling steps and eta={eta}')

    n_saved = len(glob.glob(os.path.join(logdir, '*.png'))) - 1
    if model.cond_stage_model is None:
        all_images = []

        print(f"Running unconditional sampling for {n_samples} samples")
        for _ in trange(n_samples // batch_size, desc="Sampling Batches (unconditional)"):
            logs = make_convolutional_sample(model, batch_size=batch_size,
                                             vanilla=vanilla, custom_steps=custom_steps,
                                             eta=eta)
            n_saved = save_logs(logs, logdir, n_saved=n_saved, key="sample")
            all_images.extend([custom_to_np(logs["sample"])])
            if n_saved >= n_samples:
                print(f'Finish after generating {n_saved} samples')
                break
        all_img = np.concatenate(all_images, axis=0)
        all_img = all_img[:n_samples]
        shape_str = "x".join([str(x) for x in all_img.shape])
        nppath = os.path.join(nplog, f"{shape_str}-samples.npz")
        np.savez(nppath, all_img)
    else:
        raise NotImplementedError('Currently only sampling for unconditional models supported.')

    print(f"sampling of {n_saved} images finished in {(time.time() - tstart) / 60.:.2f} minutes.")
```
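For reference, the core piece the conditional branch would need is classifier-free guidance: run the UNet twice per denoising step, once with and once without the class conditioning, and blend the two noise predictions with a guidance scale. A minimal sketch of just that blending step, with NumPy arrays standing in for the model's noise predictions (in the real script these would come from `model.apply_model`):

```python
# Sketch of classifier-free guidance (CFG) blending, assuming eps_uncond
# and eps_cond are the UNet's noise predictions without and with the
# class conditioning. Arrays here are dummy stand-ins for illustration.
import numpy as np

def guided_eps(eps_uncond: np.ndarray, eps_cond: np.ndarray, scale: float) -> np.ndarray:
    """Blend unconditional and conditional noise predictions.

    scale = 1.0 reproduces the plain conditional prediction; larger
    values push the sample harder toward the conditioning signal.
    """
    return eps_uncond + scale * (eps_cond - eps_uncond)

# toy check with dummy "noise predictions"
e_u = np.zeros((1, 3, 4, 4))
e_c = np.ones((1, 3, 4, 4))
print(guided_eps(e_u, e_c, 3.0).mean())  # 3.0
```

This blended prediction would then replace the unconditional one inside the DDIM/DDPM update at every step.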

nicolasfischoeder commented 1 year ago

Aha, yes, I would also need something like that!

ugiugi0823 commented 10 months ago

I made the code. Replace scripts/sample_diffusion.py with ugiugi_sample.py. Ping me if there's a problem~

And use the command below (placeholders for `-r`, `-l`, `--batch_size`, `-c`, and `-n` inferred from the concrete example that follows):

```shell
CUDA_VISIBLE_DEVICES=0 python scripts/sample_diffusion.py -r <checkpoint> -l <logdir> --batch_size <batch size> -c <custom steps> -e <eta: 0.0-1.0> -n_c <class number: 0-999> -c_s <cfg: 0-15> -n <num samples>
```

Or just copy this example:

```shell
CUDA_VISIBLE_DEVICES=0 python scripts/sample_diffusion.py -r models/ldm/cin256/model.ckpt -l wxxk_log --batch_size 20 -c 40 -e 1.0 -n_c 0 -c_s 3.0 -n 100
```

https://github.com/ugiugi0823/latent-diffusion/blob/main/scripts/ugiugi_sample.py
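A minimal sketch of the argument parsing implied by the command above. The flag names (`-n_c` for the ImageNet class id, `-c_s` for the CFG scale) follow ugiugi0823's example; the long option names and defaults are illustrative assumptions, not the actual script:

```python
# Hypothetical argparse setup matching the flags in the example command.
# Long names and defaults are assumptions for illustration only.
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="class-conditional LDM sampling")
    p.add_argument("-r", "--resume", type=str, help="path to model checkpoint")
    p.add_argument("-l", "--logdir", type=str, default="samples", help="output directory")
    p.add_argument("--batch_size", type=int, default=20)
    p.add_argument("-c", "--custom_steps", type=int, default=50, help="number of DDIM steps")
    p.add_argument("-e", "--eta", type=float, default=1.0, help="DDIM eta in [0, 1]")
    p.add_argument("-n_c", "--class_id", type=int, default=0, help="ImageNet class, 0-999")
    p.add_argument("-c_s", "--cfg_scale", type=float, default=3.0, help="guidance scale")
    p.add_argument("-n", "--n_samples", type=int, default=100)
    return p

args = build_parser().parse_args(
    "-r models/ldm/cin256/model.ckpt -l wxxk_log --batch_size 20 "
    "-c 40 -e 1.0 -n_c 0 -c_s 3.0 -n 100".split()
)
print(args.class_id, args.cfg_scale, args.custom_steps)  # 0 3.0 40
```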