zye1996 / Mobilefacenet-TF2-coral_tpu


Cannot run inference video #3

Open HoangTienDuc opened 3 years ago

HoangTienDuc commented 3 years ago

Hello @zye1996. Thanks for your awesome work. I tried to run your inference/inference_video.py, but got two errors:

The first error is:

  File "/Storage/ducht/face/Mobilefacenet-TF2-coral_tpu/inference/inference_video.py", line 87, in <module>
    face_recognizer = FaceRecognizer(REC_MODEL_PATH, tpu=args.coral_tpu)
  File "/Storage/ducht/face/Mobilefacenet-TF2-coral_tpu/inference/FaceRecognizer.py", line 89, in __init__
    self.rec_output_index = self.interpreter.get_output_details()[1]['index']
IndexError: list index out of range

To work around it, I commented out https://github.com/zye1996/Mobilefacenet-TF2-coral_tpu/blob/53303e902d5ae5ac1fa2aaf5de337ab9176742ce/inference/FaceRecognizer.py#L90 and changed the index of self.rec_output_index from 1 to 0 in https://github.com/zye1996/Mobilefacenet-TF2-coral_tpu/blob/53303e902d5ae5ac1fa2aaf5de337ab9176742ce/inference/FaceRecognizer.py#L89 (a sketch of the idea is below). That fixed the first error, but then I hit a second one.
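
A minimal sketch of that workaround (get_output_indices is a hypothetical helper, not part of the repository; it assumes tflite_runtime is installed):

import tflite_runtime.interpreter as tflite

def get_output_indices(model_path):
    # The quantized recognition model used here exposes a single output, so
    # indexing get_output_details()[1] raises "list index out of range".
    interpreter = tflite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    details = interpreter.get_output_details()
    rec_output_index = details[0]['index']  # embedding output
    mask_output_index = details[1]['index'] if len(details) > 1 else None  # absent here
    return rec_output_index, mask_output_index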

The second error is:

  File "/Storage/ducht/face/Mobilefacenet-TF2-coral_tpu/inference/inference_video.py", line 145, in <module>
    feature, mask = face_recognizer.face_recognize(aligned, mask=True)
  File "/Storage/ducht/face/Mobilefacenet-TF2-coral_tpu/inference/FaceRecognizer.py", line 106, in face_recognize
    self.interpreter.set_tensor(self.rec_input_index, aligned_norm)
ValueError: Cannot set tensor: Got value of type UINT8 but expected type FLOAT32 for input 331, name: input_1

How can I fix these errors?

HoangTienDuc commented 3 years ago

I changed the type of aligned_norm from uint8 to float32 and it works (I use the inference_model_993_quant.tflite model). But why is it so slow? The embedding time is around 0.40 s (0.40273499488830566 s measured), and my CPU is an i9-9900K. A rough reproduction with a timing check is sketched below.
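
The sketch below reproduces the cast and times the invocation; the 112x112 crop size is an assumption and the zero image is only a placeholder for a real aligned face:

import time
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = "../pretrained_model/training_model/inference_model_993_quant.tflite"

interpreter = tflite.Interpreter(model_path=MODEL)
interpreter.allocate_tensors()
rec_input_index = interpreter.get_input_details()[0]['index']

aligned_face = np.zeros((112, 112, 3), dtype=np.uint8)                  # placeholder crop
aligned_norm = np.expand_dims(aligned_face, axis=0).astype(np.float32)  # UINT8 -> FLOAT32

interpreter.set_tensor(rec_input_index, aligned_norm)
start = time.perf_counter()
interpreter.invoke()          # this call dominates the per-face embedding time
print(f"invoke took {time.perf_counter() - start:.3f} s")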

HoangTienDuc commented 3 years ago

I found that invoke() takes up most of the time. Do you have any solution?

zye1996 commented 3 years ago

Hi HoangTienDuc, this is expected: the quantized model is optimized for mobile devices and runs slowly on a desktop CPU. If you have a GPU, you can accelerate execution with the TFLite GPU delegate. Otherwise you have to use the Keras model for CPU inference, as sketched below.
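
For the CPU path, a minimal sketch with the Keras checkpoint (the .h5 path is taken from later in this thread; the 112x112 input size and the random stand-in image are assumptions, and custom layers may require custom_objects when loading):

import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model(
    "pretrained_model/training_model/inference_model.h5", compile=False)

face = np.random.rand(1, 112, 112, 3).astype(np.float32)  # stand-in for an aligned face
outputs = model.predict(face)   # embedding vector (and possibly a mask head, depending on the model)
print(outputs)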

HoangTienDuc commented 3 years ago

Hi @zye1996. Thanks for your response. I also tried this project on a Jetson Nano (without TPU), and it is faster than my i9-9900K CPU. <3 But after face alignment, the feature always has the same value. The feature value is:

array([[-3.8769207 ,  1.482352  , -0.45610833,  0.34208125, -0.34208125,
        -0.5701354 , -0.22805417, -0.11402708, -0.45610833,  0.45610833,
        -0.91221666,  0.7981896 ,  1.482352  ,  0.        , -0.91221666,
        -1.8244333 ,  0.6841625 ,  0.34208125,  1.254298  ,  0.91221666,
        -1.5963792 ,  0.        ,  1.9384604 , -0.22805417, -1.8244333 ,
         1.5963792 , -2.2805417 , -0.45610833,  3.3067853 , -0.45610833,
         2.1665146 , -0.6841625 ,  0.6841625 , -1.0262437 , -0.5701354 ,
        -2.1665146 ,  0.45610833,  2.0524874 , -0.91221666, -1.5963792 ,
        -1.0262437 ,  0.11402708,  1.368325  , -0.45610833,  1.1402708 ,
        -0.11402708, -1.254298  ,  0.91221666, -0.5701354 , -2.0524874 ,
         0.91221666,  0.22805417, -0.22805417,  0.5701354 ,  0.7981896 ,
        -1.9384604 ,  0.22805417,  0.6841625 , -0.6841625 ,  1.5963792 ,
         0.45610833, -0.7981896 , -0.91221666, -1.1402708 ,  0.11402708,
        -0.5701354 , -1.1402708 ,  2.3945687 ,  0.11402708,  0.22805417,
        -2.964704  ,  0.22805417,  0.11402708,  1.482352  ,  0.5701354 ,
         0.6841625 ,  0.45610833, -0.5701354 , -0.5701354 , -0.5701354 ,
         1.5963792 ,  1.254298  , -1.482352  ,  0.45610833,  0.6841625 ,
        -1.1402708 ,  1.482352  , -1.5963792 ,  1.0262437 ,  1.5963792 ,
         1.482352  , -1.1402708 ,  1.5963792 , -0.45610833, -2.3945687 ,
        -0.5701354 ,  1.8244333 ,  3.0787313 , -0.7981896 , -1.1402708 ,
        -0.34208125, -0.22805417,  0.7981896 , -0.22805417, -0.22805417,
         2.508596  ,  0.34208125,  0.        , -2.622623  , -1.368325  ,
        -1.5963792 ,  0.22805417, -0.45610833,  0.91221666, -1.9384604 ,
        -0.6841625 ,  1.5963792 ,  0.7981896 ,  1.8244333 , -1.9384604 ,
        -1.0262437 ,  1.1402708 , -0.7981896 , -0.5701354 , -1.254298  ,
         2.508596  ,  0.7981896 , -0.5701354 ]], dtype=float32)

Code is:

        aligned = image
        aligned_norm = np.expand_dims(aligned, axis=0)

        self.interpreter.set_tensor(self.rec_input_index, aligned_norm.astype('float32'))
        self.interpreter.invoke()
        feature = get_quant_int8_output(self.interpreter, self.rec_output_index)
        # if mask:
        #     mask = get_quant_int8_output(self.interpreter, self.mask_output_index)
        #     return feature, mask
        return feature

I am sure it is the same code, but I get different results on my PC and the Jetson Nano, and on my PC it runs fine. I also checked the alignment: the aligned face in frame t is different from the aligned face in frame t+1. A minimal way to verify the symptom is sketched below.
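
Sanity check for the symptom: embed two clearly different inputs and compare. Identical outputs would point at the interpreter/runtime rather than the alignment code. The model path and the 112x112 size are assumptions taken from earlier in the thread:

import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = "../pretrained_model/training_model/inference_model_993_quant.tflite"

interpreter = tflite.Interpreter(model_path=MODEL)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]['index']
out = interpreter.get_output_details()[0]['index']

def embed(x):
    interpreter.set_tensor(inp, x.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out).copy()   # raw (still quantized) output values

a = embed(np.zeros((1, 112, 112, 3), dtype=np.float32))
b = embed(np.random.rand(1, 112, 112, 3).astype(np.float32))
print("outputs identical:", np.array_equal(a, b))   # True reproduces the bug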

zye1996 commented 3 years ago

It looks like it might be a TensorFlow problem. I do not have a Jetson Nano, so I cannot reproduce it. Let me confirm it for you when I borrow one next week.

zye1996 commented 3 years ago

By the way, please use the v1 models from the pre-trained models if you are running on a CPU.

HoangTienDuc commented 3 years ago

What about the Jetson Nano? The v1 folder only contains a model for the TPU, so I cannot run it on the Jetson Nano alone:

ValueError: Failed to load delegate from libedgetpu.so.1

Can you fix the issue where the feature always has the same value?

zye1996 commented 3 years ago

Hi, I made some modifications; please just run python inference_video.py under the inference folder. I tested it and the result should be OK. (For CPU-only devices, see the sketch below.)
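
For reference, a minimal sketch of building the interpreter without the Edge TPU delegate on CPU-only devices (make_interpreter is a hypothetical helper, not the repository's exact code):

import tflite_runtime.interpreter as tflite

def make_interpreter(model_path, use_tpu=False):
    # Request the Edge TPU delegate only when a Coral accelerator is attached;
    # on a plain Jetson Nano, loading libedgetpu.so.1 raises the ValueError above.
    # An edgetpu-compiled .tflite still needs the Coral; use the plain quantized
    # model (e.g. inference_model_993_quant.tflite) for CPU-only inference.
    delegates = [tflite.load_delegate('libedgetpu.so.1')] if use_tpu else []
    interpreter = tflite.Interpreter(model_path=model_path,
                                     experimental_delegates=delegates)
    interpreter.allocate_tensors()
    return interpreter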

HoangTienDuc commented 3 years ago

It seems that you are adding Deep SORT tracking with a trick; I think that is very interesting. I am installing some packages and will test it soon. But which device did you test on: CPU, Jetson Nano, or something else? Do you know why the same code gives different results on my PC and my Jetson Nano? Can you add a requirements.txt file? Thank you, and sorry for asking so many questions.

zye1996 commented 3 years ago

I tested on a PC and a Raspberry Pi, and they produced the same results. I will add a requirements.txt later.

HoangTienDuc commented 3 years ago

I tried it on my Jetson Nano. Different inputs always produce the same feature value; it is so ridiculous.

import argparse
import json
import multiprocessing
import platform
import time

import cv2
import tensorflow as tf
import tflite_runtime.interpreter as tflite

# from deep_sort.deep_sort.detection import Detection
# from deep_sort.deep_sort.nn_matching import NearestNeighborDistanceMetric
# from deep_sort.deep_sort.tracker import Tracker
from FaceRecognizer import *
from FileVideoStreamer import *
from postprocessing import *

REC_MODEL_PATH = "../pretrained_model/training_model/inference_model_993_quant.tflite"

image_path = '/home/jetson/Documents/Mobilefacenet-TF2-coral_tpu/dataset/768.jpg'

image = cv2.imread(image_path)
face_recognizer = FaceRecognizer(REC_MODEL_PATH, False, False)
feature, mask = face_recognizer.face_recognize(image)
print('feature: ', feature)
feature:  [[-3.8769207   1.482352   -0.45610833  0.34208125 -0.34208125 -0.5701354
  -0.22805417 -0.11402708 -0.45610833  0.45610833 -0.91221666  0.7981896
   1.482352    0.         -0.91221666 -1.8244333   0.6841625   0.34208125
   1.254298    0.91221666 -1.5963792   0.          1.9384604  -0.22805417
  -1.8244333   1.5963792  -2.2805417  -0.45610833  3.3067853  -0.45610833
   2.1665146  -0.6841625   0.6841625  -1.0262437  -0.5701354  -2.1665146
   0.45610833  2.0524874  -0.91221666 -1.5963792  -1.0262437   0.11402708
   1.368325   -0.45610833  1.1402708  -0.11402708 -1.254298    0.91221666
  -0.5701354  -2.0524874   0.91221666  0.22805417 -0.22805417  0.5701354
   0.7981896  -1.9384604   0.22805417  0.6841625  -0.6841625   1.5963792
   0.45610833 -0.7981896  -0.91221666 -1.1402708   0.11402708 -0.5701354
  -1.1402708   2.3945687   0.11402708  0.22805417 -2.964704    0.22805417
   0.11402708  1.482352    0.5701354   0.6841625   0.45610833 -0.5701354
  -0.5701354  -0.5701354   1.5963792   1.254298   -1.482352    0.45610833
   0.6841625  -1.1402708   1.482352   -1.5963792   1.0262437   1.5963792
   1.482352   -1.1402708   1.5963792  -0.45610833 -2.3945687  -0.5701354
   1.8244333   3.0787313  -0.7981896  -1.1402708  -0.34208125 -0.22805417
   0.7981896  -0.22805417 -0.22805417  2.508596    0.34208125  0.
  -2.622623   -1.368325   -1.5963792   0.22805417 -0.45610833  0.91221666
  -1.9384604  -0.6841625   1.5963792   0.7981896   1.8244333  -1.9384604
  -1.0262437   1.1402708  -0.7981896  -0.5701354  -1.254298    2.508596
   0.7981896  -0.5701354 ]]

Thank you for your support.

zye1996 commented 3 years ago

Sorry, I cannot help, as I do not have a Jetson Nano. If it works fine on the desktop, then maybe it is related to TensorFlow itself.

HoangTienDuc commented 3 years ago

I am asking about it on the TensorFlow GitHub; I think this ridiculous behavior comes from the TF kernel. Your work is awesome, and I am glad to see your future improvements. Thank you.

HoangTienDuc commented 3 years ago

Hi @zye1996. Could you share your original model and how you converted it to the quantized model? I want to check it out. My email is tienduchoangtb@gmail.com. Or can you discuss this problem in https://github.com/tensorflow/tensorflow/issues/45483?

zye1996 commented 3 years ago

Hi, I put the original model in pretrained_model/training_model/inference_model.h5. You can use the quantization code in the utils folder for quantization; a rough sketch of the conversion is below.
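
For anyone following along, a rough sketch of post-training quantization from the Keras checkpoint (the converter calls are standard TFLite API, but the representative dataset below uses random arrays as a placeholder and the 112x112 size is an assumption; the repository's utils folder has the actual conversion code):

import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model(
    "pretrained_model/training_model/inference_model.h5", compile=False)

def representative_dataset():
    for _ in range(100):
        # Replace with real aligned face crops to get a usable calibration range.
        yield [np.random.rand(1, 112, 112, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

with open("inference_model_quant.tflite", "wb") as f:
    f.write(converter.convert())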