Opened 4 years ago
Thank you for your interest. By multi-source, do you mean swapping parts from different people? Or using the same parts from a single person, with multiple images of that person to improve quality?
Either one, thanks for the reply.
I tried to improve the quality with many sources. At test time, a weighted average of the transformed frames could be used, but this produces blurrier results. So training with multi-source would probably be needed.
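The test-time trick described above (blending the per-source transformed frames with weights) could be sketched roughly like this; `weighted_average_frames` is a hypothetical helper, not part of the repo, and the frames stand in for whatever the generator produces for each source image:

```python
import numpy as np

def weighted_average_frames(frames, weights):
    """Blend per-source transformed frames (H, W, C) with normalized weights.

    Hypothetical illustration: 'frames' would be the generator outputs for
    each source image; averaging them is what tends to blur fine detail.
    """
    stacked = np.stack(frames, axis=0).astype(np.float64)  # (S, H, W, C)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                                        # normalize to sum to 1
    # contract the source axis: sum_s w[s] * stacked[s]
    return np.tensordot(w, stacked, axes=1)                # (H, W, C)

# toy example: two 2x2 single-channel "frames"
a = np.full((2, 2, 1), 0.0)
b = np.full((2, 2, 1), 1.0)
out = weighted_average_frames([a, b], [1.0, 3.0])  # every pixel equals 0.75
```

Because every output pixel is a convex combination of the corresponding pixels across sources, high-frequency detail that differs between sources gets averaged away, which matches the blurrier results mentioned above.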
Ah, that makes sense. Thanks again, sir.
Great job on this, and thanks for the straightforward and working notebook. Much appreciation for sharing it. I have had a lot of fun testing out various settings/inputs and doing different experiments. The new coseg feature is well done, and I like how you provided the different models for comparison. I have found the fidelity of the supervised model to be quite amazing even with the roughest of inputs (sometimes hard=false actually helps with rough inputs, I have found, as well).
The 5- and 10-part models have their uses too; you can really see how segmentation complexity and realism are highly dependent on the input, which is why I am glad you included them all. I have many thoughts on all of this, but I am mainly wondering if you have had a chance to try out using multiple source inputs? I have seen this in a one-shot demo before, although the results weren't as great as what I have seen with your notebook, that's for sure. Thanks again!
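A rough sketch of what toggling a hard vs. soft segmentation might mean for the part masks discussed above. This is an assumption about the flag's behavior, not the repo's actual implementation: soft masks come from a softmax over per-part logits, while hard masks snap each pixel to its winning part:

```python
import numpy as np

def soft_and_hard_masks(logits):
    """Illustrative only: turn per-part logits (K, H, W) into masks.

    soft: softmax over the part axis, so each pixel is a blend of parts
    hard: one-hot argmax, so each pixel belongs to exactly one part
    (assumed interpretation of a hard=True/False style switch)
    """
    e = np.exp(logits - logits.max(axis=0, keepdims=True))  # stable softmax
    soft = e / e.sum(axis=0, keepdims=True)                 # sums to 1 per pixel
    hard = np.zeros_like(soft)
    winner = soft.argmax(axis=0)                            # winning part per pixel
    np.put_along_axis(hard, winner[None], 1.0, axis=0)
    return soft, hard

# toy example: 3 parts over a 4x4 image
rng = np.random.default_rng(0)
soft, hard = soft_and_hard_masks(rng.normal(size=(3, 4, 4)))
```

With rough inputs, the soft blend can hide noisy part boundaries, which may be why hard=false sometimes looks better there.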