yogeshbalaji / Generate_To_Adapt

Implementation of "Generate To Adapt: Aligning Domains using Generative Adversarial Networks"
https://arxiv.org/pdf/1704.01705.pdf
142 stars · 33 forks

Replicating Office results #1

Open engharat opened 6 years ago

engharat commented 6 years ago

Hi, I'm trying to replicate the official paper results on Office. I would like to follow the exact paper protocol, using a ResNet50 pretrained on ImageNet, but I'm missing some hints on how to make the ResNet feature output and the generator input match. In particular, I have noticed that the GAN part diverges if the generator input (the embedding output) is made too large. For example, the svhn->mnist experiment published in this repo uses a final F embedding of 128 elements; increasing it causes the algorithm to stop converging. But ResNet50 ends up with 2048 features; how should I shrink them before passing them to the generator?
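For reference, one common way to shrink the 2048-dimensional ResNet50 features before the generator is a bottleneck FC layer. This is only a sketch of that idea, not the authors' architecture; the layer sizes and the BN+ReLU choice are my assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical bottleneck: compress ResNet50's 2048-dim pooled features
# down to the 128-dim embedding size used in the svhn->mnist experiment.
class Bottleneck(nn.Module):
    def __init__(self, in_dim=2048, out_dim=128):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.bn = nn.BatchNorm1d(out_dim)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.fc(x)))

features = torch.randn(16, 2048)   # stand-in for ResNet50 pooled output
embedding = Bottleneck()(features)
print(embedding.shape)             # torch.Size([16, 128])
```

Whether the bottleneck helps convergence here is exactly the open question in this thread.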

engharat commented 6 years ago

I've tried both using the unmodified 2048 features and applying a bottleneck FC layer, as well as reducing the images to 64x64 for the discriminator, but in all cases the GAN fails to converge, exhibiting total mode collapse. Any suggestions? :)

yogeshbalaji commented 6 years ago

Hi, for ResNet50, the images were scaled to 64x64, and the 2048-dimensional features were used as the embeddings. We did observe mode collapse in our experiments, but the point is not to generate high-quality images; it is to perform adaptation to the target domain. We mention this in our paper as well.

There is a slight change in the architecture we used for the Office, Syn2Real and VisDA experiments. We included these architecture details in our supplementary material, but I found today that the current arXiv version doesn't contain the supplementary material. We will update this soon. I will also push the Office code so that it is easier for you to replicate the results. Please give me a couple of days.
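To make the reply above concrete: with 64x64 images and the 2048-dim embedding fed directly to the generator, a DCGAN-style generator could look like the sketch below. This is an illustration under my own assumptions (the noise dimension, channel widths, and the embedding+noise concatenation are guesses, not the authors' exact Office architecture):

```python
import torch
import torch.nn as nn

# Hedged sketch: a DCGAN-style generator producing 64x64 images whose input is
# the 2048-dim ResNet50 embedding concatenated with a noise vector.
# noise_dim and ngf are placeholder choices, not values from the paper.
class Generator(nn.Module):
    def __init__(self, embed_dim=2048, noise_dim=512, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(embed_dim + noise_dim, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),                 # 4x4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),                 # 8x8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),                 # 16x16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf), nn.ReLU(True),                     # 32x32
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                              # 64x64
        )

    def forward(self, embedding, noise):
        # Treat the concatenated vector as a 1x1 spatial map and upsample.
        x = torch.cat([embedding, noise], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)

G = Generator()
fake = G(torch.randn(4, 2048), torch.randn(4, 512))
print(fake.shape)  # torch.Size([4, 3, 64, 64])
```

The promised supplementary material and Office code would be the authoritative source for the real layer configuration.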

engharat commented 6 years ago

Thanks a lot for your fast reply and support. I see how the system could perform good domain adaptation even if the produced images are not realistic, but my attempts to make it work on Office always ended at chance accuracy on the target, so I will use your Office code when it's available. I was reading the paper on arXiv, so I missed the supplementary material describing the networks in detail. Thanks again!

deep0learning commented 6 years ago

Hi, thank you so much for this work. We are waiting for the code for the Office dataset. Hopefully we will get it soon. Thanks in advance.

chengzhipanpan commented 5 years ago

Hi, I have the same difficulty replicating the results on Office. Is there updated code for the Office dataset yet? Thank you.

xingshuojing commented 5 years ago

> Hi, I'm trying to replicate the official paper results on Office. I would like to follow the exact paper protocol, using a ResNet50 pretrained on ImageNet [...]

Hi, I'm trying to replicate the svhn->mnist experiment published in this paper. Were you able to get the result published in the paper (about 92%)? Why can I only get about 85%? Thank you!

xingshuojing commented 5 years ago

Hello, I did not modify any parameters while reproducing the results. Why do I only get about 85% in the svhn->mnist experiment? Does anything need to be modified? Looking forward to your reply.

heleibin commented 4 years ago

Hi, thanks for your code! It is really helpful! But I have problems replicating the Office experiment. Could I get a copy of your source code for the Office dataset? Thanks for your reply!

geonyeong-park commented 4 years ago

@chenwdyy @heleibin Hello, have you made any progress on reproducing the results? I'm also struggling with the Office dataset. Although I followed the settings in the paper, it shows severe mode collapse.

Dr-Zhou commented 4 years ago

> @chenwdyy @heleibin Hello, have you made any progress on reproducing the results? I'm also struggling with the Office dataset. Although I followed the settings in the paper, it shows severe mode collapse.

Although it shows mode collapse, that doesn't lead to bad adaptation results. You can see this in the paper.

geonyeong-park commented 4 years ago

> @chenwdyy @heleibin Hello, have you made any progress on reproducing the results? I'm also struggling with the Office dataset. Although I followed the settings in the paper, it shows severe mode collapse.
>
> Although it shows mode collapse, that doesn't lead to bad adaptation results. You can see this in the paper.

Sorry for the confusion. What I mean by mode collapse is that the model never generates meaningful images; it just produces random noise.

heleibin commented 4 years ago

I didn't get the result. The Python program I used ran into a problem: the vectors are not the same in the first dimension, and I haven't worked it out. By the way, does your program run normally?


dingkmC commented 4 years ago

Can anyone share the trained weights (.pth file) of the model? And what is a proper number of training epochs?
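If someone does share a checkpoint, loading it is the standard `state_dict` round trip. A minimal sketch, with a placeholder file name and model (not the authors' weights):

```python
import torch
import torch.nn as nn

# Save a model's parameters to a .pth file and restore them into a
# fresh model of the same architecture. "checkpoint.pth" is a placeholder.
model = nn.Linear(2048, 128)
torch.save(model.state_dict(), "checkpoint.pth")

restored = nn.Linear(2048, 128)
restored.load_state_dict(torch.load("checkpoint.pth"))
print(torch.equal(model.weight, restored.weight))  # True
```

The architecture you instantiate must match the one the checkpoint was saved from, which is why the Office architecture details matter here.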