pytorch / glow

Compiler for Neural Network hardware accelerators
Apache License 2.0

How to perform quantization of my ONNX or PyTorch model #3570

Open WilliamZhaoz opened 4 years ago

WilliamZhaoz commented 4 years ago

I have my own PyTorch and ONNX models. How can I quantize them using Glow's Python API, and then how can I run inference on them in Glow? Is there any clear documentation? Thanks.

jackm321 commented 4 years ago

Hi @WilliamZhaoz, for ONNX models you should be able to use the Loader or any of the example binaries that use the Loader (see this doc: https://github.com/pytorch/glow/blob/master/docs/Quantization.md#how-to-perform-nn-conversion).

For PyTorch models, this isn't currently available in torch_glow, but we have plans to add this feature. If you are interested in implementing it, please go for it; otherwise I will probably be able to do it next week.
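For reference, the profile-then-quantize flow described in that doc looks roughly like this (a sketch based on the Quantization.md examples; the model name, input name, and image mode are illustrative and must be adjusted for your own model):

```shell
# Step 1: run the model in profiling mode over calibration inputs so
# Glow records the numeric ranges of all tensors into profile.yaml.
./bin/image-classifier tests/images/imagenet/*.png -image-mode=0to1 \
    -m=resnet50 -model-input-name=gpu_0/data -dump-profile="profile.yaml"

# Step 2: reload the profile; Glow converts the graph to a quantized
# representation using the captured ranges and runs the quantized model.
./bin/image-classifier tests/images/imagenet/*.png -image-mode=0to1 \
    -m=resnet50 -model-input-name=gpu_0/data -load-profile="profile.yaml"
```

The same `-dump-profile`/`-load-profile` pattern applies to the other Loader-based example binaries, not just the image classifier.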

WilliamZhaoz commented 4 years ago

Thanks Jack. I have an ONNX model, converted from a PyTorch model, but my task is not an image classification task. In that situation, how can I run my model, since I only find an image classification interface in Glow?

jackm321 commented 4 years ago

@WilliamZhaoz apologies for the delayed response. You should be able to use ImageClassifier or TextTranslator as an example for using Loader to load a model from file and run it.

WilliamZhaoz commented 4 years ago

Thanks Jack. So you mean that I can write the data interface to run my ONNX model by only rewriting the Loader, and that it will work well even for models that are not an ImageClassifier or TextTranslator?

jackm321 commented 4 years ago

I'm not sure exactly what you mean.

What I meant was that you can probably make a binary similar to ImageClassifier or TextTranslator for your task, and probably even reuse Loader.cpp to create the Glow graph for you.
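A minimal sketch of what such a custom binary could look like, assuming Glow's C++ importer API (`ONNXModelLoader` plus `ExecutionEngine`); the model path, input name, and shape here are placeholders, not anything from the thread:

```cpp
#include "glow/ExecutionEngine/ExecutionEngine.h"
#include "glow/Importer/ONNXModelLoader.h"

using namespace glow;

int main() {
  ExecutionEngine EE; // defaults to the interpreter backend
  Module &mod = EE.getModule();
  Function *F = mod.createFunction("main");

  // Hypothetical input: adjust the name and shape to match your model.
  Tensor inputData(ElemKind::FloatTy, {1, 3, 224, 224});
  ONNXModelLoader loader("my_model.onnx", {"input"},
                         {&inputData.getType()}, *F);

  // Allocate backing tensors for the graph's placeholders and copy in
  // the input data.
  PlaceholderBindings bindings;
  bindings.allocate(mod.getPlaceholders());
  Placeholder *input = mod.getPlaceholderByNameSlow("input");
  updateInputPlaceholders(bindings, {input}, {&inputData});

  EE.compile(CompilationMode::Infer);
  EE.run(bindings);
  // Read results from the output placeholder's backing tensor here.
  return 0;
}
```

This is the same shape as what ImageClassifier does, minus the image preprocessing; a task-specific binary would replace the input tensor setup and the output handling.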

ponnamsairam commented 4 years ago

@WilliamZhaoz You need to generate a bundle from that .onnx model using model-compiler; after that you can run the bundle via a main.cpp driver. For reference, please go through AOT.md in the docs.
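The AOT flow from AOT.md looks roughly like this (a sketch; the model file, bundle directory, and driver file names are placeholders):

```shell
# Compile the ONNX model ahead-of-time into a self-contained bundle
# (an object file plus serialized weights) for the CPU backend.
model-compiler -backend=CPU -model=my_model.onnx -emit-bundle=build

# Link the generated bundle with your own driver: your main.cpp feeds
# inputs, calls the generated entry point, and reads back the outputs.
g++ main.cpp build/my_model.o -o run_model
./run_model
```

AOT.md documents the memory layout of the generated bundle and the signature of the entry point your driver must call.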

See #3939 -- go through this link.