AntixK / PyTorch-VAE

A Collection of Variational Autoencoders (VAE) in PyTorch.
Apache License 2.0

Problem of dataset #93

Open kuailexiaohunzi opened 4 months ago

kuailexiaohunzi commented 4 months ago

I want to use the vanilla VAE model to train on my own dataset. The original images look like this: [image: d0003]. My data-import code is shown here: [image]. The configuration file is here: [image]. I modified the kld_weight parameter, but the generated images are still unsatisfactory. I know the CelebA dataset comes with attribute files, and I am not sure whether my poor results are caused by my dataset not having such files. However, I also noticed that although the attribute-file parameters are passed in during image generation, they are never actually used. I am very confused and hope to receive your help. Thank you.

kuailexiaohunzi commented 4 months ago

@MisterBourbaki, I just tried another dataset mentioned in the code, OxfordPets, and ran the same test; the result is still poor. I think there may be a problem with the data-import method. Do you have a suggestion? Looking forward to your help, thank you.

MisterBourbaki commented 4 months ago

Hi again, thanks for the more detailed post! I still do not understand what you did: a full training run on your custom dataset, or just an evaluation with a pre-trained VAE? Also, note that you need to uncomment and modify lines 101 through 125 of dataset.py accordingly. Last but not least, what is your aim here: image classification, or something else?
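For a custom dataset, those lines essentially need to point at a Dataset class of your own. Below is a minimal sketch modelled on the repo's OxfordPets class; the class name MyDataset, the file-extension filter, the 75/25 split, and the dummy label are illustrative assumptions, not part of the repo.

```python
# Minimal sketch of a custom dataset for dataset.py, modelled on the
# repo's OxfordPets class. Paths, extensions, the split ratio and the
# dummy label 0.0 are placeholders; adapt them to your data.
from pathlib import Path
from typing import Callable, Optional

from PIL import Image
from torch.utils.data import Dataset


class MyDataset(Dataset):
    def __init__(self, data_path: str, split: str,
                 transform: Optional[Callable] = None, **kwargs):
        self.data_dir = Path(data_path)
        self.transforms = transform
        imgs = sorted(f for f in self.data_dir.iterdir()
                      if f.suffix.lower() in {".jpg", ".jpeg", ".png"})
        cut = int(len(imgs) * 0.75)  # simple 75/25 train/val split
        self.imgs = imgs[:cut] if split == "train" else imgs[cut:]

    def __len__(self):
        return len(self.imgs)

    def __getitem__(self, idx):
        img = Image.open(self.imgs[idx]).convert("RGB")
        if self.transforms is not None:
            img = self.transforms(img)
        return img, 0.0  # dummy label; the VAE training loop ignores it
```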

kuailexiaohunzi commented 4 months ago

> Hi again, thanks for the more detailed post! I still do not understand what you did: a full training run on your custom dataset, or just an evaluation with a pre-trained VAE? Also, note that you need to uncomment and modify lines 101 through 125 of dataset.py accordingly. Last but not least, what is your aim here: image classification, or something else?

Thank you for your reply. I ran a full training on a custom dataset; in fact, this repo does not provide dedicated evaluation code. Also, my goal is image generation. Finally, are you sure you mean lines 101 to 125 of dataset.py? [image]

MisterBourbaki commented 4 months ago

I am referring to the lines in the original file, since you may have modified it locally; I think I mean the lines after line 115 in your screenshot. May I ask what exactly your problem is? I still do not understand.

kuailexiaohunzi commented 4 months ago

> I am referring to the lines in the original file, since you may have modified it locally; I think I mean the lines after line 115 in your screenshot. May I ask what exactly your problem is? I still do not understand.

I'm very sorry for the oversight. My current problem is that the images generated after training on my own dataset are of poor quality, and I want to find out where the problem lies.

MisterBourbaki commented 4 months ago

What do you mean by poor quality? :) I am trying to understand whether the issue is in the code, or whether it is a more general "ML-related" issue, meaning "my trained model does not perform well", which is a totally different issue and a very hard one to crack...

kuailexiaohunzi commented 4 months ago

> I am referring to the lines in the original file, since you may have modified it locally; I think I mean the lines after line 115 in your screenshot. May I ask what exactly your problem is? I still do not understand.

In addition, I just looked at lines 101 to 125 of the original file. When I ran the experiment with the Oxford Pets dataset I did remove the annotations, but the resulting reconstructions and samples were still of poor quality. In contrast, the images obtained after training on the CelebA dataset were still good.

MisterBourbaki commented 4 months ago

> In addition, I just looked at lines 101 to 125 of the original file. When I ran the experiment with the Oxford Pets dataset I did remove the annotations, but the resulting reconstructions and samples were still of poor quality. In contrast, the images obtained after training on the CelebA dataset were still good.

I hope you commented out the lines below, where the CelebA dataset is used :) Otherwise the last definitions take priority... and you would still be training on the CelebA dataset.
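To make the pitfall concrete, here is a sketch of how the relevant part of VAEDataset.setup() might look; the MyDataset class is the hypothetical one sketched earlier in this thread, the transforms and constructor arguments are simplified, and only the overall structure is meant to match dataset.py.

```python
# Sketch of the "last assignment wins" pitfall in VAEDataset.setup().
# MyDataset is the hypothetical custom class from the earlier sketch;
# MyCelebA is the CelebA wrapper defined in the repo's dataset.py.
from pytorch_lightning import LightningDataModule
from torchvision import transforms


class VAEDataset(LightningDataModule):
    def __init__(self, data_path: str, patch_size: int = 64, **kwargs):
        super().__init__()
        self.data_dir = data_path
        self.patch_size = patch_size

    def setup(self, stage=None):
        train_transforms = transforms.Compose([
            transforms.Resize(self.patch_size),
            transforms.CenterCrop(self.patch_size),
            transforms.ToTensor(),
        ])

        # Custom dataset: the block you want to stay active.
        self.train_dataset = MyDataset(self.data_dir, split="train",
                                       transform=train_transforms)

        # CelebA block: if these lines are left uncommented, this later
        # assignment silently replaces the custom dataset above, and
        # training still runs on CelebA.
        # self.train_dataset = MyCelebA(self.data_dir, split="train",
        #                               transform=train_transforms,
        #                               download=False)
```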

kuailexiaohunzi commented 4 months ago

> What do you mean by poor quality? :) I am trying to understand whether the issue is in the code, or whether it is a more general "ML-related" issue, meaning "my trained model does not perform well", which is a totally different issue and a very hard one to crack...

Here are the results I obtained from my experiment with the CelebA dataset: [recons_VanillaVAE_Epoch_0] [recons_VanillaVAE_Epoch_0]. The images above are the reconstructed image and the original image, in that order. Below them are the results from my experiment with the custom dataset, again the reconstruction followed by the original; they are exactly the ones I showed you before.

kuailexiaohunzi commented 4 months ago

> I hope you commented out the lines below, where the CelebA dataset is used :) Otherwise the last definitions take priority... and you would still be training on the CelebA dataset.

When I experimented with the Oxford Pets dataset, I commented out the CelebA lines. When experimenting with my custom dataset, I changed MyCelebA to MyDataset.

kuailexiaohunzi commented 4 months ago

> Here are the results I obtained from my experiment with the CelebA dataset: [recons_VanillaVAE_Epoch_0] [recons_VanillaVAE_Epoch_0]. The images above are the reconstructed image and the original image, in that order. Below them are the results from my experiment with the custom dataset, again the reconstruction followed by the original; they are exactly the ones I showed you before.

In addition, the results from my experiment with the Oxford Pets dataset are as follows, the reconstructed image followed by the original (cropped): [recons_VanillaVAE_Epoch_0] [recons_VanillaVAE_Epoch_0]

MisterBourbaki commented 4 months ago

Hi @kuailexiaohunzi, if I am not mistaken, your last post shows images at epoch 0? Meaning, no training has occurred? I firmly believe there is no code issue here; I think the model you want to use and/or the hyperparameters you chose are not well suited to the task at hand (which I still do not understand... how do you generate new images with VAE-type models?)

kuailexiaohunzi commented 4 months ago

> Hi @kuailexiaohunzi, if I am not mistaken, your last post shows images at epoch 0? Meaning, no training has occurred? I firmly believe there is no code issue here; I think the model you want to use and/or the hyperparameters you chose are not well suited to the task at hand (which I still do not understand... how do you generate new images with VAE-type models?)

You are right, I am showing images from epoch 0, but in fact the images at epoch 100 look just like this, while the CelebA results are already quite good at epoch 0. Also, isn't the VAE a generative model? Can't it be used for image generation tasks?
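For reference, generation with these models amounts to decoding random latent vectors. Here is a minimal sketch of how one might sample from a trained VanillaVAE checkpoint; the constructor arguments, the checkpoint path, and the handling of the "model." key prefix reflect my reading of this repo and should be checked against your local copy.

```python
# Minimal sketch: sampling new images from a trained VanillaVAE.
# The checkpoint path and hyperparameters are placeholders.
import torch
from torchvision.utils import save_image

from models import VanillaVAE  # as organised in the PyTorch-VAE repo

model = VanillaVAE(in_channels=3, latent_dim=128)

# Lightning checkpoints in this repo wrap the VAE in an experiment module,
# so the state-dict keys are typically prefixed with "model.".
ckpt = torch.load("checkpoint.ckpt", map_location="cpu")
state = {k.replace("model.", "", 1): v for k, v in ckpt["state_dict"].items()}
model.load_state_dict(state)
model.eval()

with torch.no_grad():
    # sample() draws z ~ N(0, I) in the latent space and decodes it
    samples = model.sample(num_samples=16, current_device="cpu")

save_image(samples.cpu(), "samples.png", normalize=True, nrow=4)
```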

MisterBourbaki commented 4 months ago

Hi there, here are my thoughts:

  1. I do not think you have any trouble with the code itself, even when using a custom dataset. At least, there is nothing pointing in that direction.
  2. "the image of epoch 100 is also like this": without seeing it myself, it is hard to know whether it is exactly the same as at epoch 0 (meaning no learning happened) or just looks the same (meaning, potentially, slow learning).
  3. There are a lot of parameters involved in Machine Learning: for instance, the size of your images (compared to CelebA's), whether your dataset has been cleaned or not, and finding the best hyperparameters (learning rate and so on). It seems you trained the model on your custom dataset without touching the config provided in the repo: a sensible starting point, but then you need to tweak the hyperparameters to find the best ones for your case :)
  4. Talking about params and constants, be careful, as a few constants are hardcoded in the code (not a good practice, but it is not easy to write fully general code); see the sketch after this list.

Hope those points will help you :)
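One example of such a hardcoded assumption, as far as I can tell from models/vanilla_vae.py (so treat the exact layer layout below as an assumption rather than a verbatim excerpt): the encoder applies five stride-2 convolutions, so a 64x64 input ends up as a 2x2 feature map, which is where a flattened size of hidden_dims[-1] * 4 comes from; images of a different size will not match the corresponding Linear layers.

```python
# Quick sanity check of the input-size assumption (a sketch; the encoder
# layout mirrors my reading of models/vanilla_vae.py and may differ from
# your local copy).
import torch
import torch.nn as nn

hidden_dims = [32, 64, 128, 256, 512]
layers, in_ch = [], 3
for h in hidden_dims:
    layers += [nn.Conv2d(in_ch, h, kernel_size=3, stride=2, padding=1),
               nn.BatchNorm2d(h),
               nn.LeakyReLU()]
    in_ch = h
encoder = nn.Sequential(*layers)

x = torch.randn(1, 3, 64, 64)   # the patch size the default config assumes
print(encoder(x).shape)         # -> torch.Size([1, 512, 2, 2])
# Flattened, that is 512 * 2 * 2 = hidden_dims[-1] * 4, which is what the
# fc_mu / fc_var Linear layers expect. A different image size changes this
# shape and breaks the hardcoded layers.
```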

kuailexiaohunzi commented 4 months ago

> Hi there, here are my thoughts:
>
> 1. I do not think you have any trouble with the code itself, even when using a custom dataset. At least, there is nothing pointing in that direction.
> 2. "the image of epoch 100 is also like this": without seeing it myself, it is hard to know whether it is exactly the same as at epoch 0 (meaning no learning happened) or just looks the same (meaning, potentially, slow learning).
> 3. There are a lot of parameters involved in Machine Learning: for instance, the size of your images (compared to CelebA's), whether your dataset has been cleaned or not, and finding the best hyperparameters (learning rate and so on). It seems you trained the model on your custom dataset without touching the config provided in the repo: a sensible starting point, but then you need to tweak the hyperparameters to find the best ones for your case :)
> 4. Talking about params and constants, be careful, as a few constants are hardcoded in the code (not a good practice, but it is not easy to write fully general code).
>
> Hope those points will help you :)

Thank you for the information. I will find time to study this and identify the problem.

sunny12345-bit commented 3 months ago

> You are right, I am showing images from epoch 0, but in fact the images at epoch 100 look just like this, while the CelebA results are already quite good at epoch 0. Also, isn't the VAE a generative model? Can't it be used for image generation tasks?

Have you solved this problem?

kuailexiaohunzi commented 3 months ago

> Have you solved this problem?

The cause of the results above is a limitation of the VAE itself: it cannot achieve good reconstruction and generation quality on complex datasets.

sunny12345-bit commented 3 months ago

> The cause of the results above is a limitation of the VAE itself: it cannot achieve good reconstruction and generation quality on complex datasets.

Thanks. May I ask whether there are any other generative models that give better results?

kuailexiaohunzi commented 3 months ago

> Thanks. May I ask whether there are any other generative models that give better results?

It depends on what your goal is; if you are not distinguishing between categories, you could take a look at diffusion models.