Hi @ChenXiao61, please try setting --checkpoint_factor to a higher value so that models are saved less frequently.
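For example (a sketch only; the flag names besides --checkpoint_factor and --depth are assumptions, so verify them with `python train.py --help`):

```
# Save checkpoints less often: a higher --checkpoint_factor means less
# frequent saves. The data path and values here are placeholders.
python train.py \
    --images_dir=./data/fundus \
    --depth=7 \
    --checkpoint_factor=10
```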
Thanks a lot @owang, I solved this problem. Another question: can I increase --depth to generate higher-resolution images? Right now I generate 256x256. And how can I reload the most recently trained parameters?
@ChenXiao61, yes, you can change the depth in order to generate higher- or lower-resolution images. Please note that the resolution doubles each time you increase the depth by 1. So for your current 256 x 256 output you have a depth of 7, and increasing it to 8 would generate 512 x 512 images. Also, at a depth of 1 (the smallest depth), the code generates 4 x 4 images.
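In other words, the side length is 4 * 2^(depth - 1); a quick sketch of the mapping:

```python
# Output resolution as a function of depth: depth 1 -> 4x4,
# and each extra level doubles the side length.
def side(depth: int) -> int:
    return 4 * 2 ** (depth - 1)

for d in range(1, 9):
    print(d, f"{side(d)}x{side(d)}")
# ... depth 7 -> 256x256, depth 8 -> 512x512
```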
In order to resume training, you need to provide the saved .pth model files via five different parameters, viz. --generator_file, --discriminator_file, --gen_optim_file, --dis_optim_file and --gen_shadow_file, and then set your --start accordingly.
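A sketch of what resuming could look like (all file paths and the epoch number are placeholders, using the flags listed above):

```
# Resume from checkpoints saved at a hypothetical epoch 100.
python train.py \
    --generator_file=models/GAN_GEN_100.pth \
    --discriminator_file=models/GAN_DIS_100.pth \
    --gen_optim_file=models/GAN_GEN_OPTIM_100.pth \
    --dis_optim_file=models/GAN_DIS_OPTIM_100.pth \
    --gen_shadow_file=models/GAN_GEN_SHADOW_100.pth \
    --start=101
```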
Hope this helps.
Please feel free to ask if you have any questions.
Best regards, @akanimax
Hello @akanimax, thank you very much.
Hello @akanimax, I have reproduced BMSG-GAN on my own data and obtained excellent generated images. I still have some questions:
1) Does training BMSG-GAN for longer always make the generated images better? Or is there a best point at which to stop training and generate data from the pretrained weights?
2) How does the amount of training data influence the result? As far as I know, more training data is better, but medical images are difficult to obtain.
3) Why are the samples produced during training by train.py better than the individual samples produced by generate_samples.py from the pretrained weights?
I look forward to your reply.
Kindest regards, Xiao Chen
@ChenXiao61,
Glad to hear that you obtained great results. I'll try to address your questions below:
1.) For GANs in general, you can't judge the quality of the generator by how long it has been trained. The way to find out which model to use among the saved checkpoints is to calculate the FID for all of them and then use the one with the lowest FID; please check out the code at https://github.com/mseitzer/pytorch-fid to calculate the FID (a command sketch follows this list).
2.) Usually, more data does give better results.
3.) Please check that you are passing the correct --out_depth to the generate_samples.py script. It's possible that you are generating at a lower depth (i.e., a lower resolution); a command sketch follows this list.
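Two hedged sketches for points 1 and 3 above; apart from --out_depth and the script names, the flag names and paths are assumptions, so verify them with each script's --help:

```
# Point 1: FID between the real images and samples generated from one
# checkpoint (pytorch-fid; lower is better). Both paths are placeholders.
python fid_score.py path/to/real_images path/to/generated_checkpoint_samples

# Point 3: regenerate samples at full resolution. Only --out_depth is
# confirmed above; the other flags are guesses, and --out_depth's indexing
# may be 0-based, so check which value gives your 256 x 256 output.
python generate_samples.py \
    --generator_file=models/GAN_GEN_SHADOW_100.pth \
    --depth=7 \
    --out_depth=6 \
    --num_samples=64 \
    --out_dir=generated/
```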
Hope this helps.
BTW, if your data isn't private and you don't mind, would you like to contribute your samples to the repo like @huangzh13? Check the README's other contributions section.
Best regards, @akanimax
Thanks for your help, @akanimax. 1) I have calculated the FID between the original data and the generated data using the code you pointed me to, after reading the notes at https://www.ctolib.com/bioinf-jku-TTUR.html and at https://github.com/mseitzer/pytorch-fid.
I have a little more than 200 original images and an equal number of generated images. With --dims=192 and --batch-size=5 I got an FID of 8.78. Did I do this correctly? I also tried --dims=64, --dims=768 and --dims=2048, which gave FIDs of 2.20, 0.69 and 125.02 respectively (see my note on --dims below question 2).
What's more, should I generate more images than the original data in order to calculate the FID?
2) I find that the GAN generates poor abnormal fundus images, even though I have more than 400 of them. Would you mind giving me some suggestions?
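For context on my --dims question above: as I understand it, FID fits a Gaussian (mean and covariance) to Inception features, so with fewer samples (~200) than feature dimensions (2048) the sample covariance is rank-deficient, which may be why the --dims=2048 value looks unstable. A minimal numpy sketch of that effect, illustrative only and not the pytorch-fid code:

```python
import numpy as np

n_samples, dims = 200, 2048            # fewer samples than feature dimensions
feats = np.random.randn(n_samples, dims)

cov = np.cov(feats, rowvar=False)      # (2048, 2048) sample covariance
print(np.linalg.matrix_rank(cov))      # at most n_samples - 1 = 199, far below 2048
```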
Best regards, @ChenXiao61
Closing this issue due to inactivity. Please feel free to comment here if you encounter any more problems. Cheers :beers:!
Hi @akanimax, thanks for sharing your code. I want to train on my own data (colour fundus photos) for data augmentation; my dataset has 235 images at a resolution of 1024x1024. I trained for more than 260 epochs, but the generated images are still bad, and I hit the error "No space left on disk": the saved models and samples used up too much of my disk (170 GB). How can I solve these problems? Thanks a lot.