akanimax / pro_gan_pytorch

Unofficial PyTorch implementation of the paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation"
MIT License
536 stars · 100 forks

Running on custom dataset #23

Closed — jyopari closed this 2 years ago

jyopari commented 5 years ago

Hello,

How would I train this using my own dataset?

Thanks

akanimax commented 5 years ago

Hi @jyopari,

Please check this repository which contains some examples for other datasets -> https://github.com/akanimax/pro_gan_pytorch-examples.

Also, there is an ongoing issue that could be helpful for you -> https://github.com/akanimax/pro_gan_pytorch-examples/issues/2.

Please feel free to ask if you have any questions.

I know I am due to write detailed documentation for the API usage 😄. I have just been busy with other work. Hope this helps for now.

Best regards, @akanimax

jyopari commented 5 years ago

Also, I tried the lfw.conf file, but I get this error: `'EasyDict' object has no attribute 'folder_distributed'`


jyopari commented 5 years ago

I got it to work, and my last question: can you explain what GAN_GEN_SHADOW is vs. GAN_GEN_OPTIM, and 0 vs. 1?

Thanks!


akanimax commented 5 years ago

Good to know that you got it to work :smile: :+1:. Yes, sure: GAN_GEN_SHADOW is the model that contains the stable weights from the exponential moving averaging (EMA) of the generator parameters, while GAN_GEN_OPTIM contains the Adam optimizer state. The numbers indicate depth: 0 is (4 x 4) and 1 is (8 x 8). Please feel free to ask if you have any further questions.
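The "shadow" EMA mentioned above can be sketched in plain Python. This is illustrative only: the actual package applies the update per-parameter on torch tensors, and the decay value `beta` below is an assumed placeholder, not the repo's setting.

```python
def ema_update(shadow, current, beta=0.999):
    """Blend each shadow weight toward the current generator weight.

    shadow  : list of floats, the slow-moving 'stable' copy of the weights
    current : list of floats, the live generator weights after a training step
    beta    : decay factor; closer to 1.0 means a smoother, slower average
    """
    return [beta * s + (1.0 - beta) * c for s, c in zip(shadow, current)]

# After each step the shadow drifts a small fraction toward the live weights;
# sampling from the shadow copy gives the more stable outputs.
shadow = [0.0, 1.0]
live = [1.0, 1.0]
shadow = ema_update(shadow, live, beta=0.9)  # first entry moves 10% toward 1.0
```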

Best regards, @akanimax

jyopari commented 5 years ago

Thanks for the info! I had another question: the number of epochs seems low in the example conf files. I want to train ProGAN at 256x256; what settings should I use in my config file?


jyopari commented 5 years ago

Also note that my training set is 565 images. It's small, so I just want to see ProGAN at least overfit.


talvasconcelos commented 5 years ago

@akanimax or @jyopari, sorry to dig this up. I'm trying this out in a Colab notebook and want to use it with my own dataset. This is what I use to load my data:

```python
def setup_data():
    """
    setup the CIFAR-10 dataset for training the CNN
    :param batch_size: batch_size for sgd
    :param num_workers: num_readers for data reading
    :param download: Boolean for whether to download the data
    :return: classes, trainloader, testloader => training and testing data loaders
    """
    # data setup:
    transforms = tv.transforms.ToTensor()

    data = tv.datasets.ImageFolder(root=data_path,
                                   transform=transforms)

    trainset = th.utils.data.DataLoader(data, shuffle=True, num_workers=1)

    return trainset
```

but running the ProGAN gives an error:

```
Starting the training process ...

Currently working on Depth: 0
Current resolution: 4 x 4
Epoch: 1

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     22     epochs=num_epochs,
     23     fade_in_percentage=fade_ins,
---> 24     batch_sizes=batch_sizes
     25 )
     26 # ======================================================================

/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _process_next_batch(self, batch)
    606             raise Exception("KeyError:" + batch.exc_msg)
    607         else:
--> 608             raise batch.exc_type(batch.exc_msg)
    609         return batch
    610

TypeError: Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 99, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
TypeError: 'DataLoader' object does not support indexing
```

Is there any example of loading a folder with images? That's what I have: dir1/dir2/10k images...

Thanks,
Tiago

akanimax commented 5 years ago

@talvasconcelos, it seems to me that you are not using PyTorch's ImageFolder dataset properly. Could you please provide the complete error stack for this error?
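For context, the traceback above comes from passing a `DataLoader` where the trainer expects an indexable dataset. A minimal pure-Python sketch of the distinction (stand-in classes for illustration, not the real torch API):

```python
class MapStyleDataset:
    """Stand-in for a map-style dataset such as torchvision's ImageFolder:
    it supports len() and integer indexing, which is what the trainer
    (and torch's own DataLoader workers) rely on."""
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        return self.samples[index]


class IterOnlyLoader:
    """Stand-in for a DataLoader: it can be iterated, but dataset[i]-style
    indexing fails with a TypeError, which is exactly the reported error."""
    def __init__(self, dataset):
        self.dataset = dataset

    def __iter__(self):
        return iter(self.dataset)


dataset = MapStyleDataset(["img_0.png", "img_1.png"])
loader = IterOnlyLoader(dataset)

first = dataset[0]     # indexing a dataset works
try:
    loader[0]          # indexing the loader raises TypeError
except TypeError:
    pass               # so: pass the dataset, not the loader, to train()
```

The practical upshot: `pro_gan.train(...)` should receive the `ImageFolder` dataset itself; it constructs its own loaders internally with the per-depth batch sizes.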

talvasconcelos commented 5 years ago

I've solved it. I was messing up the depth for my image size: 4 instead of 6. I also pulled in the examples' DataLoader module to load the images, and it's working now. I get an error when setting loss='hinge', though, so I'm going with the default!

I just feel that I got better results from DCGAN... need to experiment more!

akanimax commented 5 years ago

@talvasconcelos, ProGAN definitely gives much better results than DCGAN at higher resolutions. The only caveat is that you need to set the progressive schedule (number of epochs per resolution) very carefully; otherwise, the whole training gets messed up. You could also try our new paper, MSG-GAN, which addresses this caveat: there is no hyperparameter tuning required and it is very easy to use.

Could you please share the error that you are receiving for loss = hinge? I'll address this in the next version of the package as well.
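As an illustration of "schedule per resolution": stage i in this package trains at 4 * 2**i pixels (stage 0 is 4 x 4, per the earlier comment), so reaching 256 x 256 takes 7 stages. The epoch and batch-size values below are made-up placeholders to show the shape of the config, not a recommendation:

```python
depth = 7                                        # stages: 4, 8, 16, 32, 64, 128, 256
num_epochs  = [40, 40, 60, 80, 120, 160, 200]    # illustrative values only
fade_ins    = [50] * depth                       # % of each stage spent fading in the new layer
batch_sizes = [512, 256, 128, 64, 32, 16, 8]     # shrink as resolution (and memory use) grows

# Each stage's resolution doubles, starting from 4 x 4 at depth index 0.
resolutions = [4 * 2 ** i for i in range(depth)]

# All three schedule lists must have one entry per stage.
assert len(num_epochs) == len(fade_ins) == len(batch_sizes) == depth

for res, ep, bs in zip(resolutions, num_epochs, batch_sizes):
    print(f"{res:>3} x {res:<3}  epochs={ep:<3}  batch_size={bs}")
```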

talvasconcelos commented 5 years ago

Giving it another try, with longer training. Trying now with these parameters (using the example from git):

```python
depth = 6

num_epochs = [200, 200, 200, 300, 500, 300]
fade_ins = [50, 50, 50, 50, 50, 50]
batch_sizes = [256, 256, 256, 128, 64, 32]
latent_size = 256

Dataset = dl.FlatDirectoryImageDataset

dataset = Dataset(
    data_dir=data_path,
    transform=dl.get_transform()
)

print("total examples in training: ", len(dataset))

pro_gan.train(
    dataset=dataset,
    epochs=num_epochs,
    fade_in_percentage=fade_ins,
    batch_sizes=batch_sizes,
    feedback_factor=1,
    log_dir=f"{base_dir}/logs/",
    sample_dir=f"{base_dir}/images/",
    save_dir=f"{base_dir}/models/")
```

This is the error I get when I set loss='hinge':

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     33     log_dir=f"{base_dir}/logs/",
     34     sample_dir=f"{base_dir}/images/",
---> 35     save_dir=f"{base_dir}/models/", loss="hinge")
     36
     37 # :param loss: the loss function to be used

TypeError: train() got an unexpected keyword argument 'loss'
```

Will definitely take a look at MSG-GAN, and try to run it on Colab.
ss32 commented 4 years ago

> also I tried the lfw.conf file, but I get this error: `'EasyDict' object has no attribute 'folder_distributed'`

@jyopari how did you solve this?

ss32 commented 4 years ago


This is solved by adding a line to configs/mnist.conf

folder_distributed: False  # whether images are distributed among folders
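The underlying crash is a missing config key: EasyDict turns key lookups into attribute lookups, so an absent entry raises an AttributeError instead of falling back to a default. A defensive sketch of the same idea with plain dicts (hypothetical helper, not the repo's actual code):

```python
# Defaults for config keys that older .conf files may not define.
CONFIG_DEFAULTS = {
    "folder_distributed": False,   # whether images are split across subfolders
}

def with_defaults(conf, defaults=CONFIG_DEFAULTS):
    """Return conf with any missing keys filled in from defaults,
    so a lookup like conf["folder_distributed"] never crashes."""
    merged = dict(defaults)
    merged.update(conf)
    return merged

conf = with_defaults({"images_dir": "data/mnist"})
# conf["folder_distributed"] is now False: defaulted instead of crashing
```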
akanimax commented 2 years ago

Closing due to inactivity. Cheers!