ginobilinie / medSynthesisV1

This is a copy of a package for medical image synthesis using LRes-ResUnet and GAN (WGAN-GP) in the PyTorch framework
MIT License

RuntimeError Output size is too small #21

Closed idhamari closed 4 years ago

idhamari commented 4 years ago

Hi, thanks a lot for sharing your work. I am trying to run the code but I keep getting errors. Some libraries are missing (e.g. Gauss) and some parts seem to be out of date. I have already tried many things, but nothing works. Have you tested this exact code recently? It would be nice if you could provide a working demo, e.g. one that runs on a free dataset or even on random numbers.

SurbhiKhushu commented 4 years ago

Hello, I am having these issues as well. Have you succeeded in running the code?

idhamari commented 4 years ago

@ginobilinie and this one :smiling_imp:

It seems this affects other users as well. If you like, I can provide some fake data to test with; for example, we can use images from MNIST to create two colored versions (red and blue). The data itself is not the point; the goal is to provide working, tested code. I can also build a Colab notebook if needed.

SurbhiKhushu commented 4 years ago

That would be great

idhamari commented 4 years ago

@ginobilinie this code downloads the MNIST dataset and then creates two lists of numpy arrays, one representing a red version of each image and the other a blue version. It can be used as demo data for this repository.

        import gzip, urllib.request
        import numpy as np, matplotlib.pyplot as plt
        from PIL import Image

        # download the raw MNIST training images
        minstUrl = "http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz"
        urllib.request.urlretrieve(minstUrl, "./train-images-idx3-ubyte.gz")
        f = gzip.open("./train-images-idx3-ubyte.gz", 'r')
        image_size = 28
        num_images = 60000
        f.read(16)  # skip the 16-byte IDX header
        buf = f.read(image_size * image_size * num_images)
        data = np.frombuffer(buf, dtype=np.uint8).astype(np.float32)
        data = data.reshape(num_images, image_size, image_size, 1)
        trainA = []; trainB = []
        for i in range(num_images):
            image = np.asarray(data[i]).squeeze()
            imge = Image.fromarray(image.astype('uint8'))
            imgeRGB = imge.convert("RGB")   # grayscale -> RGB
            imgRed = np.array(imgeRGB)
            imgRed[:, :, 1:] = 0            # zero green and blue: keep only the red channel
            imgBlue = np.array(imgeRGB)
            imgBlue[:, :, :2] = 0           # zero red and green: keep only the blue channel
            trainA.append(imgRed)
            trainB.append(imgBlue)
        #endfor
        plt.imshow(trainA[0])
        plt.show()
        plt.imshow(trainB[0])
        plt.show()
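
For completeness, and as an assumption about how these lists would be consumed downstream (not something from the repo): the two lists can be stacked into single numpy arrays before being fed to a data pipeline, e.g.

        trainA = np.stack(trainA)   # shape (60000, 28, 28, 3), dtype uint8
        trainB = np.stack(trainB)

This keeps the demo data in memory as two plain arrays, one per "color domain".
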
SurbhiKhushu commented 4 years ago

Thank you.

Can you explain to me what is happening in utils.py: def Generator_2D_slicesV1_OneEpoch()?

I am getting the error AssertionError: 3D tensors expect 2 values for padding. I have extracted image patches of size [112, 64, 64, 64] and I am passing them to the model (patch_A.ndim: 4, patch_A.dtype: float32). I have not converted my dataset to h5py, so I created my own patches and loaded them using a DataLoader.
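
For reference, here is a minimal, self-contained sketch (not code from this repo) of the two things that usually trigger this assertion: a missing channel dimension on the patch, and a mismatch between the tensor rank and the number of padding values in torch.nn.functional.pad. The names patch_A and patch_5d below are illustrative only.

        import torch
        import torch.nn.functional as F

        # A patch shaped like the one described above: [112, 64, 64, 64],
        # i.e. a batch of 112 single-channel 64x64x64 volumes with no channel axis.
        patch_A = torch.randn(112, 64, 64, 64)

        # 3D conv models usually expect 5D input (N, C, D, H, W); adding the
        # missing channel dimension is one common fix:
        patch_5d = patch_A.unsqueeze(1)   # -> [112, 1, 64, 64, 64]
        print(patch_5d.shape)

        # The assertion itself comes from F.pad with a non-constant mode, where
        # the number of padding values must match the tensor rank
        # (3D tensor -> 2 values, 4D -> 4 values, 5D -> 6 values):
        x3d = torch.randn(1, 64, 64)
        F.pad(x3d, (1, 1), mode="replicate")   # 2 values for a 3D tensor: fine
        try:
            F.pad(x3d, (1, 1, 1, 1, 1, 1), mode="replicate")   # 6 values: fails
        except (AssertionError, RuntimeError, NotImplementedError) as e:
            print(e)   # e.g. "3D tensors expect 2 values for padding" (wording varies by PyTorch version)
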

Thanks in advance.

idhamari commented 4 years ago

I can only trace the code when it runs. I am still getting different types of errors, and I think the code has not been tested. If you get anywhere, please let me know ;)

SurbhiKhushu commented 4 years ago

Ok great

idhamari commented 4 years ago

@SurbhiKhushu this one works. I am now training the model to see how the results look.

SurbhiKhushu commented 4 years ago

Yes, but that one uses the Keras framework.