zsyzzsoft / co-mod-gan

[ICLR 2021, Spotlight] Large Scale Image Completion via Co-Modulated Generative Adversarial Networks

How to convert the pretrained model to Onnx or TensorRT #35

Open GuardSkill opened 3 years ago

GuardSkill commented 3 years ago

I found that deploying StyleGAN is very difficult because of the custom ops. Could you provide some help?

zsyzzsoft commented 3 years ago

It is possible to convert the custom ops to regular ops. StyleGAN's authors have implemented this. You can pass impl='ref' to each call of e.g. this function.
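
For example, something like the sketch below (just a sketch, assuming the StyleGAN2-style signatures of fused_bias_act and upsample_2d under dnnlib/tflib/ops; the shapes are made up for illustration):

```python
# Sketch: force the pure-TensorFlow reference implementation of the custom ops
# by passing impl='ref' instead of the default impl='cuda'.
# Assumes the StyleGAN2-style signatures shipped in dnnlib/tflib/ops.
import tensorflow as tf  # TF 1.x, as required by this repo
from dnnlib.tflib.ops.fused_bias_act import fused_bias_act
from dnnlib.tflib.ops.upfirdn_2d import upsample_2d

x = tf.placeholder(tf.float32, [None, 64, 128, 128])  # hypothetical NCHW activations
b = tf.zeros([64])                                     # hypothetical bias

y = fused_bias_act(x, b, act='lrelu', impl='ref')  # reference op instead of the CUDA plugin
z = upsample_2d(y, factor=2, impl='ref')
```

The same change has to be made at every call site in the network definition.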

GuardSkill commented 3 years ago

> It is possible to convert the custom ops to regular ops. StyleGAN's authors have implemented this. You can pass impl='ref' to each call of e.g. this function.

Thank you very much for your help and reply! I tried it yesterday by passing impl='ref' to every invoked function, but it didn't work, I think because I load your model via pickle. I was also confused by the code around the dnnlib.tflib.Network class, so instead I replaced the CUDA Python API functions of the ops with the reference functions directly, and that let me convert the model to ONNX. I successfully converted it to an ONNX model today! By the way, ONNX model inference takes about 0.5-1 s under ONNX Runtime, and GPU memory usage is less than 3 GB. Awesome! Thanks for your reply and concern!!! XD
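
Roughly, the idea looks like the sketch below (not my exact code; it assumes co-mod-gan keeps StyleGAN2's dnnlib.tflib layout with _fused_bias_act_ref/_fused_bias_act_cuda and _upfirdn_2d_ref/_upfirdn_2d_cuda, that the checkpoint unpickles to a (G, D, Gs) tuple, and the file names are placeholders):

```python
# Sketch only, not the exact conversion code.
import pickle
import tensorflow as tf  # TF 1.x, as required by this repo

import dnnlib.tflib as tflib
from dnnlib.tflib.ops import fused_bias_act as fba
from dnnlib.tflib.ops import upfirdn_2d as ufd

# Route the CUDA code paths to the pure-TF reference implementations, so the
# graph built while unpickling the network contains only standard TF ops.
fba._fused_bias_act_cuda = fba._fused_bias_act_ref
ufd._upfirdn_2d_cuda = ufd._upfirdn_2d_ref

tflib.init_tf()
with open('co-mod-gan-checkpoint.pkl', 'rb') as f:  # placeholder path
    _G, _D, Gs = pickle.load(f)                     # assumes the usual (G, D, Gs) snapshot tuple

# Freeze the inference graph of Gs so it can be handed to a converter.
sess = tf.get_default_session()
output_nodes = [t.op.name for t in Gs.output_templates]
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_nodes)
with open('co-mod-gan-frozen.pb', 'wb') as f:
    f.write(frozen.SerializeToString())

# Tensor names needed for the ONNX conversion step.
print('inputs :', [t.name for t in Gs.input_templates])
print('outputs:', [t.name for t in Gs.output_templates])
```

From there the frozen graph can go through tf2onnx's command-line converter (python -m tf2onnx.convert --graphdef co-mod-gan-frozen.pb --inputs ... --outputs ... --output co-mod-gan.onnx), filling in the tensor names printed above; since the Network templates are created with unknown shapes, the input shapes probably have to be pinned there as well.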

duygiangdg commented 2 years ago

Hi @GuardSkill. Could you share the code to convert the pretrained model to Onnx?