Open black-puppydog opened 6 years ago
What size of image are you testing on? This is only an issue when the features are small, e.g. if you use a mask during style transfer.
Well, I wouldn't call it an "issue" (never mind that I posted it as one here :P), rather just a question. I was testing on the four images you have in the repo and getting a noticeable (as in, I can notice it) difference.
Like I said, it's not dramatic, I was just wondering if there was a specific rationale behind not regularizing one of the two calls.
How do I train a model? Please tell me, thanks.
I actually observed the same issue, @black-puppydog. I was also wondering why the regularization term eye() was so big. Mathematically, I believe it's a mistake to leave it as it is; it should be multiplied by a small epsilon value. I indeed observe a huge difference between the results obtained with a small regularization term (epsilon 1e-8) and with the big one (no epsilon)!
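To make the magnitude argument concrete, here is a small numpy sketch (not the repo's code, just an illustration): for a normalized covariance matrix whose entries are O(1), adding the full eye() shifts every eigenvalue by 1, i.e. by roughly the eigenvalues' own size, while an epsilon-scaled identity barely perturbs them.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=(4, 500))          # toy feature map: 4 channels, 500 positions
f -= f.mean(axis=1, keepdims=True)
cov = f @ f.T / (f.shape[1] - 1)       # normalized covariance, entries O(1)

lam = np.linalg.eigvalsh(cov)
lam_big = np.linalg.eigvalsh(cov + np.eye(4))           # full eye(), no epsilon
lam_small = np.linalg.eigvalsh(cov + 1e-8 * np.eye(4))  # eye() * 1e-8

# Adding eye() shifts every eigenvalue by exactly 1 -- comparable to the
# eigenvalues themselves -- whereas the epsilon version is a tiny nudge.
print(lam.round(3))
print(lam_big.round(3))
print(lam_small.round(3))
```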
Hi, first off: thanks for the concise and easy to follow implementation, and congrats for the work building on it, I really enjoyed it. :)
In util.py you write:
https://github.com/sunshineatnoon/PytorchWCT/blob/2a2b4e490394272665ced89634290e63314872d8/util.py#L49
and
https://github.com/sunshineatnoon/PytorchWCT/blob/2a2b4e490394272665ced89634290e63314872d8/util.py#L61
First: why not regularize both computations? I had this fail on some occasions.
Second: the regularization term eye() is huge compared to the normalized covariance matrix. I am more used to seeing eye() * 1e-6 or the like for regularization, and changing the term to that actually does make a noticeable difference in the stylization outcome. Since we're talking about artistic style transfer here, it's hard to judge which version is better, but on principle, the smaller the regularization, the closer we are to the actual whitening/coloring transform, no?
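A minimal numpy sketch of that last point (a simplified whitening step, not the repo's PyTorch code; `whiten` and the epsilon values are my own for illustration): with eye() * 1e-6 the whitened features come out with a covariance very close to the identity, while the un-scaled eye() leaves them visibly under-whitened.

```python
import numpy as np

def whiten(features, eps):
    """Whiten feature channels via the regularized covariance.

    features: (C, H*W) array. eps scales the identity regularizer added
    to the normalized covariance before the inverse square root.
    """
    f = features - features.mean(axis=1, keepdims=True)
    c, n = f.shape
    cov = f @ f.T / (n - 1) + eps * np.eye(c)   # regularized covariance
    w, v = np.linalg.eigh(cov)
    w = np.clip(w, 1e-12, None)                 # guard tiny/negative eigenvalues
    return v @ np.diag(w ** -0.5) @ v.T @ f     # cov^(-1/2) @ f

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 1000))
strong = whiten(feats, eps=1.0)    # full eye(), as I read the repo's code
weak = whiten(feats, eps=1e-6)     # eye() * 1e-6, the usual small ridge

# Deviation of the whitened covariance from the identity:
err_strong = np.abs(np.cov(strong) - np.eye(8)).max()
err_weak = np.abs(np.cov(weak) - np.eye(8)).max()
print(err_strong > err_weak)  # True
```

With eps=1.0 every eigenvalue lambda of the covariance is mapped to lambda/(lambda+1), so the output is nowhere near whitened; with eps=1e-6 the transform is essentially exact, which is why the stylization outcome changes.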