RileyLazarou / medium

Code for medium articles
MIT License

Whether the 3D data generated by GAN is suitable for observing probability density. #3

Closed · Minxiangliu closed this issue 4 years ago

Minxiangliu commented 4 years ago

Hello, your visualization method caught my interest.

I am studying the use of GANs to generate 3D medical images (such as CT images), and I am trying to compare the probability density of the generated samples against the actual probability density of the standard normal distribution.

I don't know whether this is because the model collapsed or because 3D data is unsuited to this approach, but the picture I produce always looks similar to the following:

[image: density plot of the generated and real samples]

Can you give me any comments or directions? Many thanks.

RileyLazarou commented 4 years ago

I'm afraid I probably won't be able to diagnose your problem without looking at your code. But if I had to guess based on your plot: are you maybe using a tanh output activation instead of a linear one? Tanh only outputs values in the range [-1, 1], and it looks like your real data's range is bigger than that.
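For illustration, here is a minimal sketch of the difference between the two output activations in PyTorch; the layer sizes are hypothetical and not from your model:

import torch
import torch.nn as nn

latent_dim, out_dim = 100, 64  # hypothetical sizes

# Generator head ending in tanh: every output is squashed into [-1, 1].
g_tanh = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                       nn.Linear(256, out_dim), nn.Tanh())

# Generator head with a linear output: values are unbounded, so the
# model can match real data that falls outside [-1, 1].
g_linear = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

z = torch.randn(8, latent_dim)
print(g_tanh(z).abs().max().item())    # always <= 1
print(g_linear(z).abs().max().item())  # can exceed 1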

Minxiangliu commented 4 years ago

Thank you very much for your reply.

I'm using the method from this paper to generate 3D CT images. The generator's output activation function is tanh.

In addition, I used the max() and min() functions to check the values of the generated images, and they do fall within [-1, 1].
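A minimal version of that check, assuming x_rand holds a generated batch:

# Verify the generator's tanh output really stays within [-1, 1].
print(x_rand.min().item(), x_rand.max().item())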

I trained with the hyperparameters from the paper; the only change I made in the code was the noise given to the generator, as follows:

import torch
from torch.autograd import Variable

# z_rand is the random noise fed to the generator.
# Originally: standard normal noise.
# z_rand = Variable(torch.randn((self.batch_size, self.latent_dim)), volatile=True).cuda()
# Changed to: uniform noise on [-1, 1].
z_rand = Variable(torch.FloatTensor(self.batch_size, self.latent_dim).uniform_(-1, 1), volatile=True).cuda()

Finally, I use the following code to generate the visualization.

import numpy as np
import scipy.stats as stats
import seaborn as sns
import matplotlib.pyplot as plt
import torch
from torch.autograd import Variable

def uniform_to_normal(z):
    '''
    Map a value from ~U(-1, 1) to ~N(0, 1) via the inverse normal CDF.
    '''
    norm = stats.norm(0, 1)
    return norm.ppf((z + 1) / 2)

z_rand = Variable(torch.FloatTensor(self.batch_size, self.latent_dim).uniform_(-1, 1), volatile=True).cuda()
test_samples = uniform_to_normal(z_rand.cpu().numpy())  # the "Real" N(0, 1) samples
x_rand = G(z_rand)  # G is the generator model.
fake_samples = np.squeeze(x_rand[0].data.cpu().numpy())  # first generated sample only
sns.kdeplot(test_samples.flatten(), c='blue', alpha=0.6, label='Real', shade=True)
sns.kdeplot(fake_samples.flatten(), c='red', alpha=0.6, label='GAN', shade=True)
plt.legend(loc=1)
plt.xlabel('Sample Space')
plt.ylabel('Probability Density')
plt.savefig(...)
plt.close()
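As a quick sanity check of the inverse-CDF mapping above (not part of the original script; note that inputs of exactly ±1 would map to ±inf under norm.ppf):

import numpy as np
import scipy.stats as stats

u = np.random.uniform(-1, 1, size=100_000)
z = stats.norm(0, 1).ppf((u + 1) / 2)  # same transform as uniform_to_normal
print(z.mean(), z.std())  # should be close to 0 and 1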

By the way, the images generated after the final round of training look normal. Thank you for your reply.

Minxiangliu commented 4 years ago

OK, I can't reproduce your example with my own model. I tried to get the result closer to what I wanted: I used the average of the training data directly as the "Real" curve shown, although that may not mean the same thing as in your tutorial.

But one thing confuses me: the region circled in red in the figure below extends past -1. Is this normal?

[image: density curves with the region below -1 circled in red]

RileyLazarou commented 4 years ago

This is a kernel density estimate, made with seaborn's kdeplot. It's a handy tool for visualizing distributions without having to specify histogram bin widths. The downside is that it can put mass at invalid values, as it does here, where there is mass below -1.
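To illustrate, a minimal sketch with made-up data: a KDE of samples lying strictly inside [-1, 1] can still show density outside that interval, because each point is smoothed by a kernel of finite width. Seaborn's clip argument restricts the estimate's support if that matters:

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

data = np.tanh(np.random.randn(10_000))  # all values strictly inside (-1, 1)

sns.kdeplot(data, label='unclipped')              # kernel smoothing leaks mass past -1 and 1
sns.kdeplot(data, clip=(-1, 1), label='clipped')  # evaluation restricted to [-1, 1]
plt.legend()
plt.show()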