mit-han-lab / once-for-all

[ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment
https://ofa.mit.edu/
MIT License

About model converter code (Pytorch to Tensorflow-Lite) for on-device inference latency test. #35

Closed gunjupark closed 3 years ago

gunjupark commented 4 years ago

Hi,

I want to run an OFA-specialized model on my device, so I need to convert the specialized OFA model (PyTorch) to a TF-Lite model.

I tried converting the model via ONNX, but some errors occurred (e.g. `ValueError: Shape must be rank 2 but is rank 1 for MatMul ...`).
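Roughly, the path I tried looks like this (a sketch; `ofa_subnet` is a placeholder for the specialized PyTorch subnet, e.g. as obtained from the OFA network's `get_active_subnet()`):

```python
import torch

# ofa_subnet: the specialized PyTorch subnet (placeholder name here;
# in the OFA repo it can be obtained via ofa_network.get_active_subnet()).
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(ofa_subnet, dummy_input, "ofa_subnet.onnx", opset_version=11)

# Converting the exported ONNX graph onward to TensorFlow / TF-Lite
# (e.g. with onnx-tf) is where the MatMul rank error appears:
#   ValueError: Shape must be rank 2 but is rank 1 for MatMul ...
```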

I also want to build latency tables from measurements on my own device's resources.

How could I get them? Thank you.

gunjupark commented 4 years ago

I'm just looking for help with converting the model or with measuring per-operation latencies on device.

I don't need nice, clean code, just some pointers on how to do it.

gunjupark commented 3 years ago

I solved it: reimplement the model in Keras -> convert it with the TF Lite Converter (TF 2.4) -> read per-op profile times from the TF Lite benchmark tool.
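For anyone hitting the same issue, a minimal sketch of the conversion step, assuming a hand-written Keras reimplementation of the specialized subnet with the PyTorch weights copied into it (`build_keras_subnet()` is a hypothetical helper name, not part of the OFA repo):

```python
import tensorflow as tf

# build_keras_subnet(): hypothetical helper that rebuilds the specialized
# OFA subnet layer-by-layer in Keras and loads the PyTorch weights into it.
keras_model = build_keras_subnet()

# Convert the Keras model to a TFLite flatbuffer (I used TF 2.4).
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
tflite_model = converter.convert()

with open("ofa_subnet.tflite", "wb") as f:
    f.write(tflite_model)
```

Per-op latencies then come from the official TFLite benchmark binary with op profiling enabled, e.g. on an Android device:

```
adb push ofa_subnet.tflite /data/local/tmp
adb shell /data/local/tmp/benchmark_model \
    --graph=/data/local/tmp/ofa_subnet.tflite --enable_op_profiling=true
```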

un-knight commented 3 years ago

> I solved it: reimplement the model in Keras -> convert it with the TF Lite Converter (TF 2.4) -> read per-op profile times from the TF Lite benchmark tool.

Hi~ I'm facing the same problem. Would you share your method for converting the PyTorch checkpoint into a TFLite model? Thanks a lot.