Closed (nickk124 closed this issue 5 months ago)
Hi :)
Thank you for your interest in our work.
Other than the suggestions in #11, you could also try reducing $\tau$, which controls the amount of randomness in generation trajectories. We've noticed that with high $\tau$, the generator often has difficulty dealing with large amounts of noise in trajectories, and generates noisy or blurry images.
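To make the role of $\tau$ concrete, here is a minimal sketch of one stochastic sampler step in the style of a Brownian-bridge posterior, where $\tau$ scales the injected noise variance. This is an illustrative toy, not the repo's actual sampling code; all names (`sb_step`, the timestep arguments) are assumptions.

```python
import numpy as np

def sb_step(x, pred_x0, t_cur, t_next, tau, rng):
    """One stochastic sampler step (toy sketch): move toward the current
    clean-image estimate, then inject Gaussian noise whose variance is
    scaled by tau. tau = 0 gives a deterministic trajectory; larger tau
    adds more randomness to the trajectory."""
    mean = pred_x0 + (t_next / t_cur) * (x - pred_x0)
    std = np.sqrt(tau * t_next * (1.0 - t_next / t_cur))
    return mean + std * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
x0 = np.zeros_like(x)
# With tau = 0 the step is fully deterministic: repeated calls agree exactly.
a = sb_step(x, x0, 1.0, 0.5, 0.0, rng)
b = sb_step(x, x0, 1.0, 0.5, 0.0, rng)
assert np.array_equal(a, b)
```

The point of the sketch is just that a smaller $\tau$ shrinks the noise term the generator has to contend with at every step.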
Let us know if this helps!
Awesome, thanks for the suggestion :) I'm assuming that's the `--tau` argument. To be honest, I'm unsure from the paper whether it's sufficient to just run inference with my trained model using a different $\tau$, or whether I need to re-train from scratch with the new $\tau$?
Thanks for the help!
I'd say you can try both. But for the model to be theoretically correct, you need to re-train from scratch.
Also, you might want to try evaluating images at all NFEs, i.e., NFE = 1, ..., 5, as UNSB sometimes achieves the best translation quality at intermediate NFEs.
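A sweep over NFEs can be sketched as below. The `translate` and `evaluate` functions here are hypothetical stand-ins (the toy metric is MSE against a reference); the only point is scoring the output at every NFE and picking the best one rather than assuming NFE = 5 is optimal.

```python
import numpy as np

def translate(x, nfe, rng):
    """Toy stand-in for a generator unrolled for `nfe` steps; the real
    model refines its output once per network function evaluation."""
    out = x
    for _ in range(nfe):
        out = 0.5 * (out + 0.1 * rng.standard_normal(out.shape))
    return out

def evaluate(output, reference):
    """Placeholder metric (MSE); swap in FID/SSIM for real evaluation."""
    return float(np.mean((output - reference) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
ref = np.zeros((8, 8))
# Score the translation at every NFE; the best is not always the largest.
scores = {nfe: evaluate(translate(x, nfe, rng), ref) for nfe in range(1, 6)}
best_nfe = min(scores, key=scores.get)
```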
Thanks for the tips! And yeah, I saw no difference when applying a different $\tau$ only at inference, so I'll try training from scratch. Closing this issue for now; thanks for answering my questions.
Hello, thank you for your awesome paper and code! I trained your model for breast MRI translation, and the translations look great overall, but I do notice a bit of blurriness/loss of fine detail in the outputs, even when the image as a whole looks good. See for example an input image from domain A (left) and the translated output for domain B (right); note the blurriness in the output.
I was wondering if you could suggest ways to reduce the blurriness, especially for this translation task, which requires only very subtle changes to the image. From your comment here https://github.com/cyclomon/UNSB/issues/11#issuecomment-1833076383, I think one solution might be to use a higher-resolution bottleneck, i.e., fewer downsampling/upsampling layers? What do you think?
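For reference, here is the arithmetic behind that idea, assuming each downsampling layer is a stride-2 convolution that halves the spatial resolution (as in CycleGAN-style resnet generators): fewer downsampling layers leave the bottleneck feature maps at a higher resolution, which should preserve more fine detail. The function name is just illustrative.

```python
def bottleneck_resolution(input_size, n_downsampling):
    """Spatial size of the bottleneck feature maps, assuming each
    downsampling layer halves the resolution (stride-2 conv)."""
    return input_size // (2 ** n_downsampling)

# e.g. for 256x256 inputs:
# 2 downsampling layers -> 64x64 bottleneck
# 1 downsampling layer  -> 128x128 bottleneck
for n in (1, 2, 3):
    print(n, bottleneck_resolution(256, n))
```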
Thanks!