Open yxt132 opened 3 years ago
Yes, I also think the white-box model is not as good as the official version.
I found that the re-implemented model's predictions have a wild color cast (see the top-right area). The original repo has the same problem.
The re-implemented model's anime style is also not obvious enough.
When I was re-implementing, I spent a lot of time making sure the color shift and the guided_filter have the same behavior. I think the training step is the same as in the official version.
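For reference, here is a minimal single-channel guided filter in NumPy (function and parameter names are mine, not the repo's); it follows the standard box-filter formulation and can be used to sanity-check that the train-time and inference-time versions behave the same:

```python
import numpy as np

def box_filter(x, r):
    """Mean filter over a (2r+1) x (2r+1) window via prefix sums, edge-padded."""
    pad = np.pad(x, r, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # exclusive 2-D prefix sum
    k = 2 * r + 1
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I, p, r=1, eps=5e-3):
    """Edge-preserving smoothing of p, guided by I (both float 2-D arrays)."""
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p
    var_I = box_filter(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                # per-window linear coefficients
    b = mean_p - a * mean_I
    return box_filter(a, r) * I + box_filter(b, r)

# A constant image should pass through (almost) unchanged.
img = np.full((8, 8), 0.5)
out = guided_filter(img, img, r=1, eps=5e-3)
```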
Now I am trying to train both steps with the same hyperparameters in the official version and in my version. Training this model costs a lot of time in the superpixel step, so I need some time to test.
If you can help find which part of the code has a problem, I will be very grateful.
Thanks for your quick response! I have not started the training yet; I will let you know if I figure out something. By the way, which superpixel method did you use in your training? I wonder how much impact the superpixel method has on the results.
My default superpixel method during training is consistent with the one mentioned in the author's paper. He uses an adaptive-brightness superpixel method to increase the brightness of the output image, and the parameters I use are consistent with the official code: sigma=1.2, seg_num=200. You can check my config file.
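As a rough illustration of the region-coloring step this discussion is about: each superpixel region is replaced by a single color that is pushed toward its brightest pixel. This is a pure-NumPy sketch; the exact weighting is my guess at the paper's adaptive-brightness description, not the official code:

```python
import numpy as np

def adaptive_region_color(image, labels, weight=0.8):
    """Replace each superpixel region with a brightness-boosted mean color.

    image  : float array (H, W, 3) in [0, 1]
    labels : int array (H, W), one id per superpixel region
    weight : blend between the plain region mean (0.0) and the brightest
             pixel's color (1.0); the official weighting may differ.
    """
    out = np.empty_like(image)
    for lab in np.unique(labels):
        mask = labels == lab
        region = image[mask]                          # (N, 3) region pixels
        mean = region.mean(axis=0)
        bright = region[region.sum(axis=1).argmax()]  # brightest pixel
        out[mask] = (1 - weight) * mean + weight * bright
    return out

# Two constant regions should be reproduced exactly (mean == brightest).
img = np.zeros((4, 4, 3))
img[:, :2] = 0.2
img[:, 2:] = 0.8
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1
out = adaptive_region_color(img, labels, weight=0.8)
```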
The official code uses selective_adacolor.
Results of the superpixel method after training for 15999 steps:
Not bad. I noticed some strange colors in the generated pictures, though.
How is the PyTorch version's training going? Any progress?
| test images | official code | pytorch version |
| --- | --- | --- |
Well done! It seems the PyTorch version's results are smoother than the official TF version. What changes did you make in the PyTorch version? I actually like the PyTorch version's results better. Can you update your repo and release the updated trained weights? Again, great work!
Hi. I added new weights to Google Drive; you can find them in the README. I also uploaded the TensorFlow version's weights, named whitebox-tf.zip.
I found that the strange color is caused by the guided filter, but I haven't found a better way to solve it yet.
The author said you could train without the guided filter and add it during inference.
OK, I will try it if time permits.
One thing you could try for the colors is a color transfer algorithm, like this one from PyImageSearch.
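For context, the PyImageSearch post implements Reinhard-style statistics transfer in L\*a\*b\* space. Here is a dependency-free sketch of the same idea applied directly in RGB (function name is mine, and skipping the color-space conversion is a simplification):

```python
import numpy as np

def transfer_color_stats(source, target):
    """Match source's per-channel mean/std to target's (Reinhard-style).

    Both inputs are float arrays of shape (H, W, 3). The original method
    does this in L*a*b*; RGB is used here only to avoid dependencies.
    """
    src_mean, src_std = source.mean((0, 1)), source.std((0, 1)) + 1e-8
    tgt_mean, tgt_std = target.mean((0, 1)), target.std((0, 1))
    return (source - src_mean) / src_std * tgt_std + tgt_mean

rng = np.random.default_rng(0)
src = rng.random((16, 16, 3))
tgt = rng.random((16, 16, 3)) * 0.5 + 0.25
out = transfer_color_stats(src, tgt)
```

After the transfer, `out` carries `src`'s structure but `tgt`'s color statistics.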
Also, regarding cartoon noise, the guided filter should help in post-processing by using lower values of epsilon (ε), as in WhiteBox's cartoonize.py.
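To see why ε matters: for a self-guided filter (guide == input), each window blends the input toward its local mean with coefficient a = var / (var + ε), so smaller ε keeps more local detail and larger ε smooths harder. A quick numeric check (the variance value here is made up for illustration):

```python
# a = var / (var + eps) is the per-window "detail kept" coefficient of a
# self-guided filter; eps trades detail retention against smoothing.
var = 0.01                       # hypothetical local variance of one window
for eps in (1e-4, 1e-2, 4e-2):
    a = var / (var + eps)
    print(f"eps={eps:g}  detail kept a={a:.2f}")
```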
@GustavoStahl thanks, I had missed that the test code's ε value is not equal to the train code's ε. But I don't think the color transfer algorithm is needed: this model should keep the original colors as much as possible and only increase the brightness. Regarding the degree of texture, I think it can be adjusted with g_gray_weight, like this.
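A sketch of what tuning that weight looks like; only the `g_gray_weight` name comes from the repo's config, the surrounding loss terms are placeholders:

```python
# Hypothetical generator objective: g_gray_weight scales the grayscale/texture
# term, so lowering it weakens texture supervision and softens line detail.
g_gray_weight = 0.1

def generator_loss(surface_loss, gray_loss, content_loss):
    # All three terms are placeholders for the real loss components.
    return surface_loss + g_gray_weight * gray_loss + content_loss

total = generator_loss(1.0, 2.0, 3.0)
```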
> I found the strange color caused by guided filter, but now I didn't find a better method to solve it.
Adding np.clip() after the guided filter would solve the artifacts.
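A quick illustration with toy values: the guided filter's linear model can overshoot the valid intensity range, which shows up as color artifacts, and clipping pins the output back into range:

```python
import numpy as np

filtered = np.array([[-12.0, 80.0], [200.0, 270.0]])   # toy filter output
clipped = np.clip(filtered, 0, 255).astype(np.uint8)    # safe 8-bit image
print(clipped)   # [[  0  80], [200 255]]
```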
Great work re-implementing the white-box model in PyTorch! However, after testing, I found the results are not as good as the official version by the authors. There is still a large gap. What do you think could be the reason? Do we need to train the model longer, or is it something else?