-
I am building an application with the TRTorch precompiled binaries, and I am able to compile full-precision and half-precision graphs successfully.
However, I run into build errors while trying to compile the i…
-
I'm attempting to compile a variation of the DenseNet model for the Edge TPU, and I'm running into problems similar to those in this issue: https://github.com/google-coral/edgetpu/issues/64. DenseNet seems to be sup…
-
I am trying to build a BERT model for extracting sentence representations, and I use mean_token_embedding.
I want to get the sentence length from the "input_ids" input to use as a network.add_slice() parameter.
…
-
I pretrained a model with 2 classes, but when I run detect_image.py I get:
`----INFERENCE TIME----
Note: The first inference is slow because it includes loading the model into Edge TPU memory.
Traceback (…
-
I tried to use edgetpu_compiler to convert my quantized model (with `tensorflow 2.4.0`).
I followed the sample code [Retrain a classification model for Edge TPU using post-training quantization (with TF2…
-
I have an Android app with a Flutter module. It has worked great for a year now. Yesterday I upgraded to Flutter 2 and upgraded some of the libraries in the yaml, then resolved all dependency conflicts. The app c…
-
### System overview:
Ubuntu 18.04
TF-GPU 1.15 installed from binary
### Problem:
I am trying to compile a quantized TFLite model which was converted from a frozen graph enabling pose-estimation(…
-
We want to be able to use TensorRT's PTQ using a PyTorch dataset to support INT8 execution.
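For context, the core of INT8 post-training quantization is computing activation ranges from a calibration set and mapping floats to int8 with a per-tensor scale; TensorRT's entropy calibrators refine this, but the basic idea can be shown framework-free. A minimal sketch (the helper names `calibrate_scale` and `quantize_int8` are illustrative, not part of TensorRT's or TRTorch's API):

```python
def calibrate_scale(samples):
    """Symmetric per-tensor scale from a calibration set: max |x| maps to 127."""
    max_abs = max(abs(v) for batch in samples for v in batch)
    return max_abs / 127.0 if max_abs else 1.0

def quantize_int8(values, scale):
    """Quantize floats to int8 by rounding v/scale and clamping to [-128, 127]."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(quantized, scale):
    """Map int8 values back to floats; error is bounded by scale/2 in range."""
    return [q * scale for q in quantized]

# Calibration data, e.g. activations collected by iterating a PyTorch DataLoader.
calib = [[0.5, -1.27, 0.03], [1.27, -0.8, 0.9]]
scale = calibrate_scale(calib)          # max |x| = 1.27, so scale = 1.27 / 127
q = quantize_int8([0.5, -1.27], scale)  # values inside the calibrated range
```

In a real PTQ flow the calibrator would feed batches from the dataset through the network and record per-tensor histograms rather than a single max, but the scale-and-clamp mapping above is the operation INT8 execution ultimately performs.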
-
tested on GPU GeForce RTX 2070
model: resnet18, traced with the following Python script
```
import torch
import torchvision
# An instance of your model.
model = torchvision.models.resnet18()
…
-
The output is always zero when I use my retrained model, which works well in TensorFlow Lite.
For example, **mobilenet_v2_1.0_224_quant.tflite** works in TensorFlow Lite, but the same model file…