CiaoHe opened this issue 1 year ago (Open)
Well, I have the same question. Style transfer is not like other tasks where you can easily get ground-truth data. Besides, it's a bit weird to simply add the features from the encoder to the ones from the T2I-Adapters, because the content image and the style image are, in general, not spatially aligned.
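To make concrete what I mean by "simply add the features": here is a rough PyTorch sketch of that additive injection. Everything in it (`TinyAdapter`, the channel sizes, `inject`) is a hypothetical placeholder, not the repo's actual code; the point is only that adapter features get summed element-wise with the encoder features at matching resolutions, which implicitly assumes spatial correspondence between the condition image and the generated image.

```python
import torch
import torch.nn as nn

class TinyAdapter(nn.Module):
    """Hypothetical stand-in for a T2I-Adapter: maps a condition image
    to one feature map per UNet encoder stage."""
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        stages, in_ch = [], 3
        for out_ch in channels:
            stages.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                nn.SiLU(),
            ))
            in_ch = out_ch
        self.stages = nn.ModuleList(stages)

    def forward(self, cond):
        feats, x = [], cond
        for stage in self.stages:
            x = stage(x)          # halve resolution, mirroring UNet downsampling
            feats.append(x)
        return feats

def inject(encoder_feats, adapter_feats):
    # Element-wise addition at matching resolutions. This is exactly the
    # spatial-alignment assumption questioned above: summing only makes
    # sense when the condition (sketch, depth, pose) shares the layout of
    # the target image, which an arbitrary style image does not.
    return [e + a for e, a in zip(encoder_feats, adapter_feats)]

# Toy usage: fake encoder features with the same shapes as the adapter's.
cond = torch.randn(1, 3, 64, 64)
adapter_feats = TinyAdapter()(cond)
encoder_feats = [torch.randn_like(f) for f in adapter_feats]
fused = inject(encoder_feats, adapter_feats)
```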
We did not use any training pairs, only the style image. The details will be released soon.
Where will it be released? As a simple update to the README, or, if necessary, in a separate repo?
Just curious: how do you construct the style pairs? Are there any mature public datasets available?