aminesoulaymani opened this issue 2 years ago
@aminesoulaymani Thanks for your interest in our work; the code will be released this month.
I'll fully test it and send patches; I did the same with PaddleGAN.
Thanks for your great work! I'm really looking forward to this code being released asap :)
Hello, the results of your paper are great. We can't wait for the code.
Code has been released. Enjoy playing LIA.
Thanks a lot, I'll enjoy playing it. Cheers
added this model to https://github.com/iperov/DeepFaceLive
@iperov Hi, thanks! Please consider CLARIFYING on the GitHub page that the animation model is from our paper, and please note that our model is for NON-COMMERCIAL use only according to the license. Thanks!
It's in the code: https://github.com/iperov/DeepFaceLive/blob/master/modelhub/onnx/LIA/LIA.py

class LIA:
    """
    Latent Image Animator: Learning to Animate Images via Latent Space Navigation
    https://github.com/wyhsirius/LIA
    """
I will not write such info on the main page.
By the way, DeepFaceLive does not contain code from your repo.
Made with the animator: https://www.youtube.com/watch?v=Ng1C78Ceyxg
@iperov Can you share some insights on how to convert LIA from PyTorch to ONNX?
It depends on how graph-unfriendly the PyTorch code is and which ops are not implemented in ONNX.
@iperov I tried, but got this error:
torch.onnx.errors.SymbolicValueError: Unsupported: ONNX export of convolution for kernel of unknown shape. [Caused by the value '17582 defined in (%17582 : Float(, , , , strides=[8192, 16, 4, 1], requires_grad=1, device=cpu) = onnx::Reshape(%17533, %17581), scope: networks.generator.Generator::/networks.styledecoder.Synthesis::dec/networks.styledecoder.StyledConv::conv1/networks.styledecoder.ModulatedConv2d::conv # /data/linfang/ONNXRuntime/LIA/networks/styledecoder.py:274:0 )' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Reshape'.]
It's F.conv2d(input, weight, padding=self.padding, groups=batch) that causes the error. I am confused because the kernel size is explicitly defined in weight. What should I do?
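The exporter is rejecting the weight tensor whose shape only becomes known at runtime: the StyleGAN2-style modulated conv reshapes per-sample weights to (B*Cout, Cin, k, k) and runs a grouped conv, and that Reshape feeding a conv weight is what ONNX can't handle. A common workaround (a sketch, assuming LIA's ModulatedConv2d follows the StyleGAN2 pattern; the function names here are illustrative, not from the repo) is to scale the *input* by the style, run a plain conv with the static shared weight, and rescale the *output* by the demodulation factor, which is mathematically equivalent:

```python
import torch
import torch.nn.functional as F

def modulated_conv2d_export_friendly(x, weight, style, eps=1e-8):
    """ONNX-friendly modulated conv: the conv weight keeps its static shape.

    x:      (B, Cin, H, W)   input features
    weight: (Cout, Cin, k, k) shared weight, NOT reshaped per sample
    style:  (B, Cin)          per-sample modulation
    """
    B, Cin, H, W = x.shape
    Cout, _, k, _ = weight.shape
    # demodulation coefficient per (sample, output channel) -- elementwise
    # ops and a reduction only, no conv with a dynamic weight
    w = weight.unsqueeze(0) * style.view(B, 1, Cin, 1, 1)   # (B, Cout, Cin, k, k)
    demod = torch.rsqrt(w.pow(2).sum(dim=(2, 3, 4)) + eps)  # (B, Cout)
    # modulate the input instead of the weight, then demodulate the output
    out = F.conv2d(x * style.view(B, Cin, 1, 1), weight, padding=k // 2)
    return out * demod.view(B, Cout, 1, 1)

def modulated_conv2d_grouped(x, weight, style, eps=1e-8):
    """Reference: the grouped-conv formulation that fails to export."""
    B, Cin, H, W = x.shape
    Cout, _, k, _ = weight.shape
    w = weight.unsqueeze(0) * style.view(B, 1, Cin, 1, 1)
    demod = torch.rsqrt(w.pow(2).sum(dim=(2, 3, 4)) + eps)
    w = (w * demod.view(B, Cout, 1, 1, 1)).view(B * Cout, Cin, k, k)
    out = F.conv2d(x.view(1, B * Cin, H, W), w, padding=k // 2, groups=B)
    return out.view(B, Cout, H, W)
```

The two agree because conv is linear in its weight: scaling input channel i by style[i] before a conv with the shared weight produces the same result as convolving with the style-scaled weight.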
Did you solve it? I'm hitting the same problem.
Awesome, but where is the code? FOMM is still the leader despite being three years old! Regards