facebookincubator / AITemplate

AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. It is specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
Apache License 2.0

Automatically parsing a PyTorch module to an AIT module, instead of defining the AIT module layer by layer #16

Open khlee369 opened 2 years ago

khlee369 commented 2 years ago

I referred to the official tutorial, but I can't find an automatic conversion from a PyTorch module to an AIT module, like ONNX->TensorRT.

https://facebookincubator.github.io/AITemplate/tutorial/how_to_infer_pt.html#define-a-pytorch-module

Is there any parser or converter that builds an AIT module directly from a PyTorch module?
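
For reference, this is roughly the layer-by-layer style the tutorial expects. A minimal sketch, assuming the public `aitemplate.frontend` / `compile_model` API from the docs; the two-layer MLP here is made up for illustration, not the tutorial's exact model:

```python
# Minimal sketch of defining an AIT module by hand (assumed public API;
# the MLP itself is illustrative, not from the tutorial).
from aitemplate.compiler import compile_model
from aitemplate.frontend import nn, Tensor
from aitemplate.testing import detect_target


class AITMLP(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        # Every layer has to be re-declared by hand, mirroring the
        # original PyTorch module one op at a time.
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        return self.fc2(self.fc1(x))


model = AITMLP(64, 128, 10)

# Inputs and outputs are declared and named explicitly.
x = Tensor(shape=[1, 64], dtype="float16", name="input0", is_input=True)
y = model(x)
y._attrs["name"] = "output0"
y._attrs["is_output"] = True

# Codegen + build the CUDA/HIP runtime for the detected target.
module = compile_model(y, detect_target(), "./tmp", "ait_mlp")
```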

antinucleon commented 2 years ago

It is not open-sourced in this release.

There is an FX2AIT project that will probably be open-sourced in the future.
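
In the meantime, the rough shape of such a converter is to walk a torch.fx trace of the PyTorch module and map each node onto an AIT op. The sketch below only shows that starting point with stock torch.fx; it is illustrative and is not FX2AIT code:

```python
# Rough sketch (not FX2AIT): symbolically trace a PyTorch module and walk
# the resulting FX graph, which is the kind of IR a PT->AIT converter
# would lower op-by-op onto AITemplate's frontend ops.
import torch
import torch.fx


class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(64, 10)

    def forward(self, x):
        return torch.relu(self.fc(x))


gm = torch.fx.symbolic_trace(TinyNet())
for node in gm.graph.nodes:
    # node.op is one of: placeholder, call_module, call_function,
    # call_method, get_attr, output -- each would dispatch to a
    # corresponding AIT op in a real converter.
    print(node.op, node.target)
```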


addisonklinke commented 2 years ago

Strongly agree with @khlee369 on this feature. Right now the conversion example for ResNet seems incredibly tedious, so I have low confidence I could make something work for a custom architecture (even though I have experience converting PyTorch to other frameworks like OpenVINO, CoreML, and ONNX). Until there's a more generic parser, I think it's going to be tough to convince people to switch.

Overall the pitch of the project (particularly avoiding TensorRT) sounds quite appealing, so I hope this progresses!

zack-ch commented 1 year ago

I really want this feature. Also, do you have plans to support parsing frameworks other than PyTorch (e.g. ONNX) into an AITemplate model?

larme commented 1 year ago

Now that PyTorch 2.0's TorchDynamo generates more accurate FX graphs, maybe an open-sourced FX2AIT would be even more helpful?
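
For instance, a custom torch.compile backend already receives the Dynamo-captured FX GraphModule, which is exactly the kind of graph FX2AIT could lower. A minimal sketch using only stock PyTorch 2.0 APIs; the toy model and the backend here are made up and just inspect the graph before falling back to eager execution:

```python
# Sketch of the PyTorch 2.0 custom-backend hook: TorchDynamo captures an FX
# graph and passes it to the backend, where a converter such as FX2AIT could
# lower it to an AITemplate module.
import torch
import torch.fx


def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
    # gm is the FX graph Dynamo captured for this code region.
    print(gm.graph)
    # A real backend would return the compiled AIT runner; here we just
    # run the captured graph eagerly.
    return gm.forward


model = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU())
compiled = torch.compile(model, backend=inspect_backend)
compiled(torch.randn(1, 64))
```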