Open · Luodian opened this issue 5 years ago
I think my question above is related to the cycle-consistency loss. Can we just load a pre-trained model `fs` to compute the task loss between images from the source and the source images stylized as target?
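For concreteness, here is a minimal sketch of what I mean (this is not the authors' code; `fs` is assumed to be a frozen, pre-trained PyTorch task network, and `src_img` / `src_as_tgt_img` are placeholder names):

```python
import torch
import torch.nn.functional as F

def task_loss_on_stylized(fs, src_img, src_as_tgt_img):
    # Sketch only: penalize label changes introduced by the S->T stylization.
    fs.eval()
    for p in fs.parameters():
        p.requires_grad_(False)   # fs is a fixed, pre-trained labeler
    with torch.no_grad():
        # fs acts as a "noisy labeler" on the original source image
        pseudo_label = fs(src_img).argmax(dim=1)
    logits = fs(src_as_tgt_img)   # predictions on the stylized source image
    # cross_entropy handles (N, C) logits for classification and
    # (N, C, H, W) logits for per-pixel segmentation alike
    return F.cross_entropy(logits, pseudo_label)
```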
@Luodian Hey, have you solved this problem or do you have any ideas? I am just starting to try to implement it on my own :) And this `fs` you mentioned is, in my opinion, related to the semantic consistency loss?
Of course you could implement it on your own. It's not that hard.
@jhoffman Same question: how do we translate GTA images to CityScapes images with our own data? I find it's different from CycleGAN's original code, especially in `cycle_gan_semantic_models.py`. I set up my dataset properly using `unaligned_datasets.py`, but when I run `train.py` in the cyclegan module, it raises an error about `no self.input_A_label found`. I find that the variable `self.input_A_label` is specifically used in SVHN->MNIST to label each image's class. GTA->CityScapes may not need this label, but dropping it degenerates to the original CycleGAN. So how do we deal with the `label` variable, the related `netCLS` network, etc.?

_Originally posted by @Luodian in #11 (comment)_
Hey, so did you solve this problem?
@Luodian Hello, did you solve this problem?
@Luodian Hello, I carefully read the code provided by the author and found that this semantic loss is not necessarily suitable for our own data. The semantic loss proposed by the author is meant to keep the category information unchanged before and after the translation, but except for datasets like MNIST, our own data does not come with this category information.
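One workaround I can imagine (just a sketch under my own assumptions, not the repository's code): instead of reading `input_A_label` from the dataset, derive per-pixel pseudo-labels from a frozen, pre-trained segmenter, so that unannotated GTA/CityScapes images can still feed the semantic loss:

```python
import torch

class PseudoLabelInput:
    """Hypothetical helper: supplies input_A_label without dataset annotations."""
    def __init__(self, pretrained_segmenter):
        self.netCLS = pretrained_segmenter          # stands in for the task net
        self.netCLS.eval()
        for p in self.netCLS.parameters():
            p.requires_grad_(False)

    def set_input(self, data):
        self.real_A = data['A']                     # GTA image batch
        self.real_B = data['B']                     # CityScapes image batch
        with torch.no_grad():
            # A per-pixel pseudo-label map replaces the per-image class label
            # that the SVHN->MNIST experiments read from the dataset.
            self.input_A_label = self.netCLS(self.real_A).argmax(dim=1)
```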
@xuhaohao86 Hello, according to the formula of the cycle-consistency loss, the loss is computed per pixel with the L1 norm, so it seems to be effective for image segmentation rather than just classification. Do you think so?
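To illustrate what I mean, here is the cycle-consistency term written out (illustrative only; `G_AB` / `G_BA` are assumed generator names, not the repository's exact variables):

```python
import torch.nn.functional as F

def cycle_consistency_loss(G_AB, G_BA, real_A, real_B, lam=10.0):
    # || G_BA(G_AB(A)) - A ||_1  +  || G_AB(G_BA(B)) - B ||_1
    rec_A = G_BA(G_AB(real_A))
    rec_B = G_AB(G_BA(real_B))
    # The L1 norm is averaged over every pixel, so the term is the same
    # whether the images are digits or GTA/CityScapes street scenes.
    return lam * (F.l1_loss(rec_A, real_A) + F.l1_loss(rec_B, real_B))
```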