zhen8838 / AnimeStylized

This repo implements a series of anime stylization algorithms: AnimeGAN, White-box Cartoonization, etc.

White-Box model results are not good #2

Open · yxt132 opened 3 years ago

yxt132 commented 3 years ago

Great work re-implementing the white-box model in PyTorch. However, after testing, I found the results are not as good as the official version from the authors; there is still a large gap. What do you think could be the reason? Do we need to train the model longer, or is it something else?

zhen8838 commented 3 years ago

Yes, I also think the white-box model is not as good as the official version.

  1. The re-implemented model produces results with wild colors (you can see the top-right area). The original repo has the same problem. animegan_test2_out

  2. The anime style in the re-implemented model's output is not obvious enough.

When re-implementing, I spent a lot of time making sure the color shift and guided_filter have the same behavior as the official code. I think the training steps are the same as the official version.

Right now I am training both the official version and my version with the same hyperparameters. Training this model spends a lot of time on the superpixel step, so I will need some time to test.

If you can help find where the problem is in the code, I would be very grateful.

yxt132 commented 3 years ago

Thanks for your quick response! I have not started training yet; I will let you know if I figure something out. By the way, which superpixel method did you use in your training? I wonder how much impact the superpixel method has on the results.

zhen8838 commented 3 years ago

The default superpixel method I use during training is the one mentioned in the author's paper: a superpixel method with adaptive brightness that increases the brightness of the output image. The parameters I use are consistent with the official code (sigma=1.2, seg_num=200); you can check my config file.
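
For illustration, here is a minimal sketch of what such a superpixel flattening step can look like, using skimage's felzenszwalb segmentation with sigma=1.2 and a plain per-segment mean color. This is not the actual training code: the adaptive-brightness weighting of selective_adacolor is left out, and scale/min_size are placeholder values.

```python
# Illustrative sketch only -- not the repo's actual superpixel code.
import numpy as np
from skimage.segmentation import felzenszwalb

def superpixel_flatten(image: np.ndarray, sigma: float = 1.2) -> np.ndarray:
    """image: float32 HxWx3 in [0, 1]. Replace each segment with its mean color."""
    # scale and min_size are placeholder values, not the official hyperparameters.
    segments = felzenszwalb(image, scale=100, sigma=sigma, min_size=50)
    out = np.zeros_like(image)
    for label in np.unique(segments):
        mask = segments == label
        # The official selective_adacolor additionally biases this average toward
        # brighter pixels ("adaptive brightness"); a plain mean is used here.
        out[mask] = image[mask].mean(axis=0)
    return out
```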

zhen8838 commented 3 years ago

Results from the official code using the selective_adacolor superpixel method after 15999 training steps: 15999_face_photo 15999_face_result 15999_scenery_photo 15999_scenery_result

yxt132 commented 3 years ago

Not bad. I noticed some strange colors in the generated pictures, though.

image

How is the PyTorch version's training going? Any progress?

zhen8838 commented 3 years ago

| test images | official code | pytorch version |
| --- | --- | --- |
| actress2 | actress2 | actress2_out |
| china6 | china6 | china6_out |
| food6 | food6 | food6_out |
| food16 | food16 | food16_out |
| liuyifei4 | liuyifei4 | liuyifei4_out |
| london1 | london1 | london1_out |
| mountain4 | mountain4 | mountain4_out |
| mountain5 | mountain5 | mountain5_out |
| national_park1 | national_park1 | national_park1_out |
| party5 | party5 | party5_out |
| party7 | party7 | party7_out |

yxt132 commented 3 years ago

Well done! It seems the PyTorch version's results are smoother than the official TF version's. What changes did you make in the PyTorch version? I actually like the PyTorch version's results better. Can you update your repo and release the updated trained weights? Again, great work!

zhen8838 commented 3 years ago

Hi. I added new weights to Google Drive; you can find the link in the README. I also uploaded the TensorFlow version's weights, named whitebox-tf.zip.

zhen8838 commented 3 years ago

I found that the strange colors are caused by the guided filter, but I haven't found a better method to solve it yet.

yxt132 commented 3 years ago

The author said you could train without the guided filter and add the guided filter during inference.

zhen8838 commented 3 years ago

OK, I will try it if time permits.

GustavoStahl commented 3 years ago

One thing you could try for the colors is a color transfer algorithm, like this one from PyImageSearch.
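
Something along these lines (a rough sketch of the Reinhard-style LAB statistics transfer that the PyImageSearch post describes, not code from this repo; it assumes BGR uint8 inputs, and the function name is just illustrative):

```python
# Rough Reinhard-style color transfer sketch (illustrative, not from this repo).
import cv2
import numpy as np

def color_transfer(source_bgr: np.ndarray, target_bgr: np.ndarray) -> np.ndarray:
    """Shift the target's per-channel LAB statistics to match the source's."""
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    out = (tgt - tgt_mean) / (tgt_std + 1e-6) * src_std + src_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```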

Also, regarding the cartoon noise, the guided filter should help as a post-process when used with lower values of epsilon (ε), as in White-box's cartoonize.py.
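
As a rough sketch of that post-processing step (using OpenCV's contrib guided filter rather than this repo's own guided_filter implementation; the radius and eps values are only illustrative):

```python
# Rough post-processing sketch; requires opencv-contrib-python for cv2.ximgproc.
import cv2
import numpy as np

def postprocess(photo_bgr: np.ndarray, cartoon_bgr: np.ndarray,
                radius: int = 1, eps: float = 5e-3) -> np.ndarray:
    """Smooth the network output while keeping edges from the original photo."""
    guide = photo_bgr.astype(np.float32) / 255.0
    src = cartoon_bgr.astype(np.float32) / 255.0
    # A small eps keeps edges crisp; larger values smooth more aggressively.
    out = cv2.ximgproc.guidedFilter(guide, src, radius, eps)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```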

zhen8838 commented 3 years ago

@GustavoStahl thanks, I had missed that the test_code ε value is not equal to the train_code ε. But I don't think the color transfer algorithm is needed: this model should keep the original colors as much as possible and only increase the brightness. As for the degree of texture, I think it can be adjusted with g_gray_weight, like this.

huangfuyang commented 3 years ago

> I found that the strange colors are caused by the guided filter, but I haven't found a better method to solve it yet.

Adding np.clip() after the guided filter would solve the artifacts.
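
For example (a minimal sketch, assuming the filter output is a float image in [0, 1]):

```python
import numpy as np

def to_uint8(filtered: np.ndarray) -> np.ndarray:
    """Clamp the guided-filter output to [0, 1] before converting to uint8.
    Without the clip, out-of-range values wrap around and appear as color artifacts."""
    return (np.clip(filtered, 0.0, 1.0) * 255.0).astype(np.uint8)
```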