dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

How to use custom .onnx or .tflite model with inference? #1824

Open RamooIsTaken opened 5 months ago

RamooIsTaken commented 5 months ago

I have a custom model that I trained using the TensorFlow API. I keep this model in both .onnx and .tflite formats. Which of these can I use with jetson.inference to get better performance on my Jetson board?
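For context: jetson-inference builds its TensorRT engines from ONNX and has no .tflite loader, so the .onnx export is the one to use here. Below is a minimal sketch of loading a custom ONNX classifier with the Python API; model.onnx, labels.txt, test.jpg, and the input_0/output_0 tensor names are placeholders that must match your exported model (newer releases also accept these as direct keyword arguments such as model= and labels=).

```python
#!/usr/bin/env python3
# Minimal sketch (not from the thread): classify an image with a custom
# ONNX model in jetson-inference. model.onnx, labels.txt, test.jpg and the
# input_0/output_0 tensor names are placeholders; use the tensor names
# your exported model actually has (inspect it with e.g. netron).
from jetson_inference import imageNet
from jetson_utils import loadImage

net = imageNet(argv=['--model=model.onnx',
                     '--labels=labels.txt',
                     '--input_blob=input_0',
                     '--output_blob=output_0'])

img = loadImage('test.jpg')               # loads the image into CUDA memory
class_id, confidence = net.Classify(img)  # TensorRT inference
print(net.GetClassDesc(class_id), f'{confidence * 100:.1f}%')
```

On the first run TensorRT builds and caches an engine for the model, so loading takes a few minutes once and is fast afterwards.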

RamooIsTaken commented 5 months ago

First of all, thank you for your response and interest. What do you mean by the corresponding files? What exactly do I need to change, and in which file? Thank you in advance for your answer, and have a nice day.


From: maynp30
Sent: Wednesday, 10 April 2024, 15:26
To: dusty-nv/jetson-inference
Cc: Remzi; Author
Subject: Re: [dusty-nv/jetson-inference] How to use custom .onnx or .tflite model with inference? (Issue #1824)

I think it is impossible to do without changing the code, but you can load your model manually by changing the corresponding files.

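For readers hitting the same question: since TensorRT has no TFLite importer, the practical path for the .tflite copy is converting it to ONNX first, for example with the tf2onnx package. A sketch, assuming tf2onnx's from_tflite converter is available in your installed version and that all ops in your model are supported (paths and opset are placeholders):

```python
#!/usr/bin/env python3
# Sketch: convert a TFLite model to ONNX so TensorRT/jetson-inference can
# load it. model.tflite and model.onnx are placeholder paths. Equivalent CLI:
#   python -m tf2onnx.convert --tflite model.tflite --output model.onnx
import tf2onnx

model_proto, _ = tf2onnx.convert.from_tflite(
    'model.tflite',           # TFLite model exported from TensorFlow
    opset=13,                 # ONNX opset supported by your TensorRT version
    output_path='model.onnx') # result to pass to jetson-inference via --model

print('outputs:', [o.name for o in model_proto.graph.output])
```

The resulting model.onnx can then be passed to imageNet/detectNet through --model as in the earlier sketch; if your model was exported from TensorFlow anyway, exporting straight to ONNX avoids the extra conversion step entirely.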