Summary
Issues
https://github.com/advimman/lama/issues/274
https://github.com/advimman/lama/issues/167
Problem
In the lama-regular model, the latent feature is a single Tensor, while in lama-fourier and big-lama it is a tuple of two tensors. Our refinement code assumed the feature was always a tuple, which is why refinement failed for lama-regular.
Solution
This PR adds a function that adapts the feature appropriately. We can't simply pass the feature through as is, because PyTorch optimizers (like Adam) don't accept a tuple as input.
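A minimal sketch of the kind of adapter this describes, assuming the feature arrives either as a single `torch.Tensor` (lama-regular) or a tuple of tensors (lama-fourier, big-lama); the names `_to_feature_list`, `_to_original_form`, and `latent_feat` are illustrative, not necessarily the identifiers used in the PR:

```python
import torch
from torch.optim import Adam

def _to_feature_list(feat):
    """Normalize the latent feature to a list of tensors.

    lama-regular produces a single Tensor, while lama-fourier and
    big-lama produce a tuple of two tensors. torch.optim optimizers
    expect an iterable of tensors, so the single-Tensor case is
    wrapped in a one-element list.
    """
    if isinstance(feat, torch.Tensor):
        return [feat]
    return list(feat)

def _to_original_form(feats, was_single_tensor):
    """Restore the container type the decoder expects."""
    return feats[0] if was_single_tensor else tuple(feats)

# Illustrative usage (hypothetical variable names):
# was_single = isinstance(latent_feat, torch.Tensor)
# feats = [f.detach().clone().requires_grad_(True)
#          for f in _to_feature_list(latent_feat)]
# optimizer = Adam(feats, lr=1e-3)
# ... the refinement loop would call
# _to_original_form(feats, was_single) before each decoder forward pass.
```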
Visual Results
Input Images
Big LaMa (before and after refinement)
LaMa-Fourier (before and after refinement; parameters tuned specifically to this model could make the results even better)
LaMa-Regular (the one on which refinement was failing)