lucidrains / denoising-diffusion-pytorch

Implementation of Denoising Diffusion Probabilistic Model in Pytorch

Sampling #2

Closed · Mshz2 closed 3 years ago

Mshz2 commented 3 years ago

Hey there,

After training the model, I try to sample images with

sampled_images = diffusion.sample(128, batch_size = 750)

My question is: do we get new, unique images every time we execute the line above? i.e. is the first batch of 750 images different from the batch I get the second time I sample?

Best

lucidrains commented 3 years ago

@Mshz2 it will be unique every time! what are you training it on? are you seeing good results from this technique?
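
For later readers, here is a minimal sketch of what "unique every time" means in practice. It is written against the current README-style API of this repo (the call quoted above passes the image size to sample(), which may reflect an older signature), with untrained weights and a small step count purely to illustrate that sampling starts from fresh Gaussian noise on every call.

```python
import torch
from denoising_diffusion_pytorch import Unet, GaussianDiffusion

# minimal setup; untrained weights are fine for demonstrating the randomness
model = Unet(dim = 64, dim_mults = (1, 2, 4, 8))
diffusion = GaussianDiffusion(
    model,
    image_size = 128,
    timesteps = 100   # small step count just to keep this illustration fast
)

# two independent sampling runs: each one denoises a different random noise tensor
batch_a = diffusion.sample(batch_size = 2)   # (2, 3, 128, 128)
batch_b = diffusion.sample(batch_size = 2)
print(torch.allclose(batch_a, batch_b))      # False (almost surely)

# fixing the global seed before each call reproduces a batch
torch.manual_seed(0)
batch_c = diffusion.sample(batch_size = 2)
torch.manual_seed(0)
batch_d = diffusion.sample(batch_size = 2)
print(torch.allclose(batch_c, batch_d))      # True on CPU; GPU kernels may add tiny nondeterminism
```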

Mshz2 commented 3 years ago

> @Mshz2 it will be unique every time! what are you training it on? are you seeing good results from this technique?

Great! I am training it on welding images. My dataset is quite small (600 images), but the results are quite good :) However, some imbalance can be observed in the generated images (it is not much, but there is still a little bit).

lucidrains commented 3 years ago

@Mshz2 wow no way! if you have the time, you can try out the current best GAN (another type of generative modeling) here: https://github.com/lucidrains/stylegan2-pytorch. I would be curious to know which framework you find produces better results!

lucidrains commented 3 years ago

@Mshz2 what exactly is a "welding image"? like images of people welding? (with the fireworks from the heat, I'm imagining)

lucidrains commented 3 years ago

@Mshz2 please share an image if you could :)

Mshz2 commented 3 years ago

> @Mshz2 what exactly is a "welding image"? like images of people welding? (with the fireworks from the heat, I'm imagining)

It is an image of welding between two metallic parts :) I will let you know which one works better ;)

Mshz2 commented 3 years ago

> @Mshz2 wow no way! if you have the time, you can try out the current best GAN (another type of generative modeling) here: https://github.com/lucidrains/stylegan2-pytorch. I would be curious to know which framework you find produces better results!

Have you seen this repo? The author says his FID is even better than StyleGAN2's. That is why I am wondering what the difference is between yours and his, because I saw you cited him in your repo.

lucidrains commented 3 years ago

@Mshz2 yup! I'm aware of that work, and it's already integrated into my repository! But it isn't a new type of GAN, just a slight modification that works with any GAN setup.

lucidrains commented 3 years ago

@Mshz2 It was independently discovered by three labs as well. It works in low-data regimes, so it doesn't matter much if you already have a lot of data (> 100k samples)

lucidrains commented 3 years ago

@Mshz2 I think denoising diffusion may be the best option in low-data scenarios though, without the use of augmentations

lucidrains commented 3 years ago

@Mshz2 so you won't share a sample? :(

Mshz2 commented 3 years ago

> @Mshz2 so you won't share a sample? :(

I am not allowed to share the images :((

lucidrains commented 3 years ago

ok :(

Mshz2 commented 3 years ago

Do you think the network would work fine if I train it with only 100 images? And how do I calculate an FID score to compare against GANs? I used this repo to calculate the FID score between 750 fake images and 600 real images (both at the same resolution). It gave FID = 164.32.

Best
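
The thread does not say which repo was used for the FID computation, so purely as a hedged sketch: one way to get an FID number for a setup like the one above (750 generated images against 600 real ones) is torchmetrics' FrechetInceptionDistance. torchmetrics is an assumption here, not the tool actually used, the tensors below are placeholders, and it needs the image extras installed (pip install torchmetrics[image]).

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# placeholders standing in for the 600 real and 750 generated welding images;
# in practice the fake batch would come from diffusion.sample(...) and the real
# batch would be loaded from disk, both as floats in [0, 1]
real_images = torch.rand(600, 3, 128, 128)
fake_images = torch.rand(750, 3, 128, 128)

fid = FrechetInceptionDistance(feature = 2048, normalize = True)  # normalize=True expects floats in [0, 1]

# feed both sets in chunks to keep memory bounded
for chunk in real_images.split(50):
    fid.update(chunk, real = True)
for chunk in fake_images.split(50):
    fid.update(chunk, real = False)

print(float(fid.compute()))  # lower is better; the thread reports FID = 164.32
```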

lucidrains commented 3 years ago

@Mshz2 very cool :D However, that's not too great an FID score lol

You can try the state-of-the-art GAN here, with FID scores calculated with one command: https://github.com/lucidrains/stylegan2-pytorch#research

lucidrains commented 3 years ago

@Mshz2 what am I looking at in that image there? is that a microscopic image?

Mshz2 commented 3 years ago

> @Mshz2 what am I looking at in that image there? is that a microscopic image?

It is zoomed in, but not microscopic.