kampelmuehler / synthesizing_human_like_sketches

Code for the WACV20 paper "Synthesizing human-like sketches from natural images using a conditional convolutional decoder"
MIT License

Inverted output? #2

Closed naoto0804 closed 4 years ago

naoto0804 commented 4 years ago

Thank you for releasing the code! When I tested it, the result looks like the image below (the background is painted black and the strokes are painted white). However, in all the figures in your paper the background is white. Did you manually invert the result images, or am I missing something important?

python test.py --img_path test_images/fish.png --label fish

[Attached output image: fish_processed]

kampelmuehler commented 4 years ago

Yes, you are correct. I inverted the figures because it is better for visibility and in print; for the network, however, the background is (arbitrarily) assumed to be all zeros.
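A minimal sketch of that post-processing step, assuming the network output is a float array with background 0 and strokes near 1 (the array name and shape here are illustrative, not from the repo):

```python
import numpy as np

# Hypothetical network output: background = 0, strokes = 1.
sketch = np.zeros((8, 8), dtype=np.float32)
sketch[2:6, 3] = 1.0  # a single vertical stroke

# For the paper figures, invert so the background is white (1)
# and the strokes are black (0).
inverted = 1.0 - sketch
```

The inversion is purely cosmetic; the network itself is trained and evaluated on the background-as-zero convention.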

naoto0804 commented 4 years ago

That makes sense, thanks!

naoto0804 commented 4 years ago

Let me ask one more question. The results are indeed promising, but the generated images have a lot of artifacts. Have you tried some kind of patch-wise adversarial loss, like the one used in CycleGAN? Did it not work?

kampelmuehler commented 4 years ago

You are right. The artifacts are largely due to the upsampling in the decoder. A patch-wise adversarial loss would certainly also improve the visual appeal. While I have tried it, it is outside the scope of this work.
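For reference, the patch-wise idea from CycleGAN/pix2pix is that the discriminator emits one real/fake logit per receptive-field patch rather than a single logit per image, and the adversarial loss averages over that grid. A toy numpy sketch of the loss computation (the function names and the 4x4 logit grid are illustrative assumptions, not code from this repo):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def patchwise_bce(logits, is_real):
    """Binary cross-entropy averaged over a PatchGAN-style logit map.

    logits: H x W array, one logit per discriminator receptive-field patch.
    is_real: whether the input image was real (target 1) or generated (target 0).
    """
    targets = np.ones_like(logits) if is_real else np.zeros_like(logits)
    probs = sigmoid(logits)
    eps = 1e-7  # guard against log(0)
    bce = -(targets * np.log(probs + eps)
            + (1.0 - targets) * np.log(1.0 - probs + eps))
    return bce.mean()  # average over all patches

# Example: an undecided discriminator (all logits 0) over a 4x4 patch grid.
loss = patchwise_bce(np.zeros((4, 4)), is_real=True)
```

Penalizing each patch independently pushes the generator to fix local texture artifacts, which is exactly the failure mode produced by decoder upsampling.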

naoto0804 commented 4 years ago

I see, thanks!