Closed. aschneid42 closed this issue 3 years ago.
Same question. Have you solved the problem? I trained my own RGB dataset without instance maps (and, as the README says, I renamed the data folders to train_A, train_B, test_A). The intermediate results in ./checkpoints look good during training, but when I test (with the same params as training, --no_instance --label_nc 0), I get the same warning (Pretrained network G has fewer layers; The following are not initialized: ['model', 'model1_1']) and terrible results (actually I think the results are wrong because my config is wrong). Thanks!
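For anyone setting up the same kind of label-free dataset, here is a rough sketch of matching train and test commands, assuming the train_A/train_B/test_A layout described above (the experiment name and dataroot are placeholders):
python train.py --name my_rgb_model --dataroot ./datasets/my_rgb_data --label_nc 0 --no_instance  # placeholder name/path
python test.py --name my_rgb_model --dataroot ./datasets/my_rgb_data --label_nc 0 --no_instance  # same flags at test time
The point is that --label_nc 0 and --no_instance (and any --netG/--ngf overrides) have to be identical at training and test time, otherwise the saved generator no longer matches the network being built.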
Solved it; just remove the param --ngf 32.
@joashchn If I use the same pretrained model but remove the flag --ngf 32, I get a new warning:
Pretrained network G has fewer layers; The following are not initialized: ['model', 'model1_1', 'model1_2']
The results change, but still look terrible, this time more cartoon-like.
But perhaps what you are saying is that I still need to re-train the model with no instances, and also remove --ngf 32 during training? Thanks.
Hi, I removed --ngf 32 because I did not use the pre-trained model. The default ngf in base_options.py is 64, so when I tested my own trained model, copying the test command from the README (which includes --ngf 32) produced the wrong result. If you use the pre-trained model, you should use the same ngf as the pre-trained model, which is probably 32.
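In other words, a rough sketch of the two cases (the first experiment name is a placeholder for your own model):
python test.py --name my_own_model --label_nc 0 --no_instance  # placeholder name; model trained with the default ngf 64, so no --ngf flag
python test.py --name label2city_1024p --netG local --ngf 32 --resize_or_crop none  # released checkpoint, which was trained with ngf 32
Whatever --ngf (and --netG) a checkpoint was trained with has to be repeated when loading it; otherwise the layer shapes no longer match and you get the same "fewer layers" warning plus garbage output.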
Ok, thanks for the tips. But if you trained your own model with --no_instance, then my question still stands:
Does something need to be modified to use the pretrained model for inference with the --no_instance flag, or does the model need to be retrained with --no_instance in order to do inference this way? @tcwang0509
@joashchn What kind of dataset are you generating? How are the results after training on train_A/train_B? Are you only using paired images, i.e. train_A images and the corresponding train_B images, without label maps or instance maps?
I am working on the same problem: I want to generate infrared images from visible images, and I have paired images for training, so I just want to know about the results. I tried pix2pix before, but the results were not good, so I want to know how the results look with pix2pixHD. If it is better, I will train a model; otherwise I will not waste time on it. Thanks in advance.
@aschneid42 Hello! I was wondering the same thing. I want to train the model on Cityscapes labels with instance maps and then test on my own labels without instance maps. Is it possible to get good results when training with instance maps and testing with --no_instance?
Yes, but the deep image prior technique has to be used, and you will get an accurate reading if you want to do it your way.
Hi! How is your work on generating infrared images from visible images going? I am trying it too, but can't get useful models. It works well on the training dataset, but when I try to generate infrared images from another dataset, the results are very bad. I just want to know how you solved this problem in the end. Thanks in advance!
I have the same issue. I did the same as you described and trained my dataset with --no_instance and --label_nc 0. When I try to load the trained generator, I get Pretrained network G has fewer layers; The following are not initialized: ['model', 'model1_1', 'model1_2'].
With model1_1 and model1_2 not initialized, the generator forward pass will not run and gives tensor-shape errors, because model_upsample and model_downsample will not work.
During testing I run CUDA_LAUNCH_BLOCKING=1 python test.py --name faces_train --gpu_ids 2 --dataroot ./datasets/train_faces --resize_or_crop none --no_instance --how_many 1 --netG local --verbose --ntest 1 --label_nc 0
and run into this issue.
Can you provide a sample command to run test.py on a custom dataset to translate images from test_A with no instance maps?
The inference runs on the test_A images, but the results are really bad.
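For reference, a rough sketch of such a test command, assuming the model was trained with --label_nc 0 and --no_instance and that --netG/--ngf are set to the same values used during training (adjust them to your training run; the name and dataroot below are taken from the command above):
python test.py --name faces_train --dataroot ./datasets/train_faces --label_nc 0 --no_instance --netG local --resize_or_crop none --how_many 1  # --netG and --ngf must match the training run
If the "fewer layers" warning still appears with matching flags, the checkpoint being loaded was most likely produced with a different --netG or --ngf than the one passed to test.py.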
The model was trained using instance maps and cannot be used for inference without instance maps. If you want to do that, please retrain your own model or fine-tune the existing model.
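For the fine-tuning route, a rough sketch, assuming the released label2city_1024p checkpoint and the standard train.py options (the new experiment name is a placeholder, and resolution/crop flags depend on your GPU memory):
python train.py --name label2city_1024p_noinst --dataroot ./datasets/cityscapes --no_instance --netG local --ngf 32 --load_pretrain ./checkpoints/label2city_1024p  # placeholder name; --load_pretrain copies whatever weights still fit
Since dropping the instance map removes the edge-map channel from the generator input, the first layers change shape and are re-initialized (the "not initialized" modules from the warning), so some additional training is needed before the outputs look reasonable again.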
Hello, I am also facing the same problem now. It works well on the training dataset, but when I try to generate infrared images from another dataset, the results are very bad. Have you resolved it?
If I use the pretrained model to do inference on a new cityscapes dataset, I use this command and the results look great!
python test.py --name label2city_1024p --netG local --ngf 32 --resize_or_crop none
However, if I try to do inference on this same new dataset with the --no_instance flag,
python test.py --name label2city_1024p --netG local --ngf 32 --resize_or_crop none --no_instance
then I get a warning saying: Pretrained network G has fewer layers; The following are not initialized: ['model', 'model1_1']. It still continues to produce synthetic outputs, but they look terrible.
Does something need to be modified to use the pretrained model for inference with --no_instance flag, or does the model need to be retrained with --no_instance in order to do inference in this way?
Thanks for your help!