Closed fido20160817 closed 2 years ago
The following will generate normal RGB images:

```python
# collect the copied samples into one batch, then map [-1, 1] -> [0, 1]
for i in range(sample_copy.shape[0]):
    image_tensor = sample_copy[i].unsqueeze(0)
    if i == 0:
        image_tensor_last = image_tensor
        continue
    image_tensor_last = th.cat((image_tensor_last, image_tensor), 0)
images_tensor = (image_tensor_last + 1) / 2
vutils.save_image(images_tensor.float(), out_path, nrow=args.num_samples, padding=0, normalize=False)
```
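As an illustration of why the `(image_tensor_last + 1) / 2` step matters (a sketch with made-up values, not code from the thread): diffusion samples live in [-1, 1], and saving them as if they were already in [0, 1] clips every negative pixel to black, which skews the whole image dark.

```python
# Hypothetical pixel values in the diffusion output range [-1, 1].
sample = [i / 4.0 for i in range(-4, 5)]  # -1.0, -0.75, ..., 1.0

# Treating these as [0, 1] values (what happens without rescaling)
# clips every negative pixel to 0, losing half the dynamic range.
clipped = [min(max(v, 0.0), 1.0) for v in sample]

# The (x + 1) / 2 rescaling instead maps [-1, 1] onto the full [0, 1].
rescaled = [(v + 1.0) / 2.0 for v in sample]

print(sum(clipped) / len(clipped))    # ~0.28, image skews dark
print(sum(rescaled) / len(rescaled))  # 0.5, full range preserved
```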
What's the difference?
The generated images come out too dark with the official 256x256_diffusion_uncond.pt checkpoint. Does anybody know why?
Specifically, in `image_sample.py`: make a copy of `sample` before the transformation is applied to it, and save that `sample_copy` at the end (see the snippet above).
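A minimal sketch of that suggestion (the surrounding code is an assumption modeled on OpenAI's guided-diffusion `image_sample.py`, with a random tensor standing in for the model output):

```python
import torch as th

# Hypothetical stand-in for the sampler output in image_sample.py:
# diffusion samples are floats in [-1, 1], shape (N, C, H, W).
sample = th.rand(2, 3, 4, 4) * 2 - 1

# Keep an untouched float copy BEFORE the transformation...
sample_copy = sample.clone()

# ...because the script then converts sample to uint8 (the
# "transformation on sample" this comment refers to):
sample = ((sample + 1) * 127.5).clamp(0, 255).to(th.uint8)
sample = sample.permute(0, 2, 3, 1).contiguous()

# sample_copy still holds the [-1, 1] floats, ready for
# (sample_copy + 1) / 2 and vutils.save_image.
```

The copy is needed because the uint8 conversion and permute are destructive: once they run, the [-1, 1] floats that `save_image` expects are gone.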