jhoffman / cycada_release

Code to accompany ICML 2018 paper
BSD 2-Clause "Simplified" License
561 stars 126 forks

@jhoffman Same question, how to translate GTA images to Cityscapes images on our own? #20

Open Luodian opened 5 years ago

Luodian commented 5 years ago

@jhoffman Same question: how can we translate GTA images to Cityscapes images on our own? I find this repo differs from CycleGAN's original code, especially in cycle_gan_semantic_models.py.

I set up my dataset properly using 'unaligned_datasets.py', but when I run train.py in the cyclegan module, it raises an error that self.input_A_label is not found.

I find that self.input_A_label is used specifically in SVHN->MNIST to label each image's class. GTA->Cityscapes may not need this label, but dropping it degenerates to the original CycleGAN.

So how do we deal with the label variable, the related netCLS network, and so on?

_Originally posted by @Luodian in https://github.com/jhoffman/cycada_release/issues/11#issuecomment-482540592_
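A minimal sketch of one way around the error discussed above: make the label lookup in the model's set_input optional, so that datasets without class labels (such as GTA -> Cityscapes) fall back to plain CycleGAN behaviour and the netCLS branch can be skipped. The class, method, and attribute names below follow this thread; the actual code in cycle_gan_semantic_models.py may differ.

```python
class CycleGANSemanticSketch:
    """Hypothetical stand-in for the model in cycle_gan_semantic_models.py."""

    def set_input(self, inp):
        self.real_A = inp['A']
        self.real_B = inp['B']
        # SVHN->MNIST batches carry class labels; GTA->Cityscapes batches may not,
        # so treat the label as optional instead of assuming it exists.
        self.input_A_label = inp.get('A_label')

    def use_semantic_loss(self):
        # Skip the netCLS / semantic branch when no labels are available.
        return self.input_A_label is not None
```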

Luodian commented 5 years ago

I think my question above is related to the cycle-consistency loss. Can we just load a pretrained model f_s to compute a task loss between the source images and the source images stylized as target?
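The idea of using a fixed pretrained source model f_s corresponds to the semantic-consistency loss in the CyCADA paper: f_s scores both the source image and its translation, and the translation is penalised for changing the predicted labels. A hedged sketch (the function name and signature are mine, not the repo's):

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(f_s, x_src, x_src_as_tgt):
    """Penalise the stylised image for changing f_s's predicted labels."""
    with torch.no_grad():
        # Pseudo-labels from the fixed, pretrained source model.
        pseudo = f_s(x_src).argmax(dim=1)
    # Cross-entropy between predictions on the stylised image and the pseudo-labels.
    return F.cross_entropy(f_s(x_src_as_tgt), pseudo)
```

Because F.cross_entropy accepts (N, C, H, W) logits with (N, H, W) targets, the same function covers both a classifier and a per-pixel segmenter.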

jianingwangind commented 5 years ago

@Luodian Hey, have you solved this problem or got any ideas? I am just starting to implement it on my own :) And this f_s you mentioned is, in my opinion, related to the semantic consistency loss?

Luodian commented 5 years ago

> @Luodian Hey, have you solved this problem or got any ideas? I am just starting to try to implement on my own:)

Of course you can implement it on your own. It's not that hard.

KAISER1997 commented 4 years ago

_Originally posted by @Luodian in #11 (comment)_

Hey, so did you solve this problem?

hankhaohao commented 4 years ago

@Luodian Hello, did you solve this problem?

_Originally posted by @Luodian in #11 (comment)_

hankhaohao commented 4 years ago

@Luodian Hello, I carefully read the code provided by the author and found that this semantic loss is not necessarily suitable for our own data. The semantic loss proposed by the author keeps the category information unchanged before and after the conversion, but apart from MNIST, other datasets do not come with this category information.
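One caveat to the comment above: for GTA -> Cityscapes, GTA does ship per-pixel ground-truth segmentation maps, so a per-pixel cross-entropy can play the same role that class labels play in SVHN -> MNIST. A sketch with made-up shapes (19 classes assumed, following the Cityscapes evaluation set):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(2, 19, 8, 8)           # (batch, classes, H, W) segmenter output
labels = torch.randint(0, 19, (2, 8, 8))    # GTA-style per-pixel ground truth
loss = F.cross_entropy(logits, labels)      # pixel-wise semantic loss
```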

ZHE-SAPI commented 2 years ago

@xuhaohao86 Hello, according to the formula for the cycle-consistency loss, the per-pixel loss is computed with the L1 norm, which seems to be effective for image segmentation, not just classification. Do you agree?
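For reference, the cycle-consistency loss mentioned here is typically an L1 penalty between the input and its round-trip reconstruction G_BA(G_AB(x)); since it is computed per pixel, it is indeed task-agnostic. A minimal sketch (the weight lam=10.0 is a common CycleGAN default, not necessarily this repo's value):

```python
import torch

def cycle_loss(real, reconstructed, lam=10.0):
    # Mean absolute (L1) difference per pixel, scaled by the cycle weight.
    return lam * torch.mean(torch.abs(real - reconstructed))
```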