NVIDIA-ISAAC-ROS / isaac_ros_object_detection

NVIDIA-accelerated, deep learned model support for image space object detection
https://developer.nvidia.com/isaac-ros-gems
Apache License 2.0

Can't visualize output at step 8-9 of quickstart guide #23

Closed: marco-monforte closed this issue 1 year ago

marco-monforte commented 1 year ago

Hi!

I'm going through the steps of the quickstart guide, but I'm stuck at steps 8-9: nothing shows up in rqt_image_view, and the terminal output is the following:

[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2023-05-16-12-05-37-570212-romolo-3913
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [ros2-1]: process started with pid [3924]
[INFO] [component_container_mt-2]: process started with pid [3926]
[INFO] [isaac_ros_detectnet_visualizer.py-3]: process started with pid [3928]
[INFO] [rqt_image_view-4]: process started with pid [3930]
[ros2-1] stdin is not a terminal device. Keyboard handling disabled.
[ros2-1] [INFO] [1684231538.076278976] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1684231538.076344491] [rosbag2_player]: Set rate to 1
[ros2-1] [INFO] [1684231538.083847767] [rosbag2_player]: Adding keyboard callbacks.
[ros2-1] [INFO] [1684231538.083907681] [rosbag2_player]: Press SPACE for Pause/Resume
[ros2-1] [INFO] [1684231538.083928386] [rosbag2_player]: Press CURSOR_RIGHT for Play Next Message
[ros2-1] [INFO] [1684231538.083942719] [rosbag2_player]: Press CURSOR_UP for Increase Rate 10%
[ros2-1] [INFO] [1684231538.083956176] [rosbag2_player]: Press CURSOR_DOWN for Decrease Rate 10%
[ros2-1] [INFO] [1684231538.084578303] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1684231543.537188336] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1684231548.992522895] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[... the same rosbag2_storage line keeps repeating ...]

[screenshot: rqt_image_view window with no image displayed]

What could be the problem?

Thanks

TeamRoboTo commented 1 year ago

Same problem here. After running the command in step 8 of the quickstart guide, I got this error in the terminal:

[screenshot attached]

jaiveersinghNV commented 1 year ago

@TeamRoboTo , please make sure that you've completed step 7 of the quickstart. In particular, the config.pbtxt file needs to be present at the specified location; the scripts/setup_model.sh script sets up the config file for you.
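A quick way to confirm step 7 actually produced the files Triton needs is to look at the model repository on disk. This is a hedged sketch: the /tmp/models/detectnet path is taken from the setup_model.sh log in this thread, and the config.pbtxt + 1/model.plan layout is the standard Triton model-repository convention (the exact engine filename may differ on your setup).

```shell
# Hedged check: verify the Triton model repository that setup_model.sh
# should have created. /tmp/models/detectnet comes from the script's log;
# config.pbtxt + 1/model.plan is the usual Triton layout (the engine
# filename is an assumption and may differ on your setup).
check_model_repo() {
  repo="${1:-/tmp/models/detectnet}"
  status=0
  for f in "$repo/config.pbtxt" "$repo/1/model.plan"; do
    if [ -f "$f" ]; then
      echo "OK:      $f"
    else
      echo "MISSING: $f"
      status=1
    fi
  done
  return "$status"
}

check_model_repo || echo "Re-run scripts/setup_model.sh before launching step 8."
```

If anything prints as MISSING, step 8 will start Triton without a usable model and no detections will ever reach the visualizer.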

@marco-monforte , could you please provide the full logs from step 6 onwards? It looks like rqt_image_view is running and the rosbag is playing, but I don't see any logs from the actual isaac_ros_detectnet node.
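To narrow down where the pipeline stops, it can also help to query the ROS graph while step 8 is running. A hedged sketch follows; the node and topic names are assumptions based on the DetectNet quickstart and may be remapped in your launch file.

```shell
# Hedged diagnostic sketch for a running quickstart (step 8). The node and
# topic names are assumptions from the DetectNet example and may be
# remapped in your launch file.
detectnet_diagnostics() {
  if ! command -v ros2 >/dev/null 2>&1; then
    echo "ros2 CLI not found -- source your ROS 2 environment first."
    return 1
  fi
  # Did the detectnet decoder load into the component container?
  ros2 node list | grep -i detectnet
  # Are detections actually being published?
  ros2 topic hz /detectnet/detections --window 20
}

detectnet_diagnostics || true
```

If the node is listed but `ros2 topic hz` reports nothing, the decoder is up but inference is not producing output, which usually points back at the model setup in step 7.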

marco-monforte commented 1 year ago

Hi @jaiveersinghNV , here's the full log from step #6 to #8. Thanks!

STEP #6

user@laptop-name:/workspaces/isaac_ros-dev$ colcon test --executor sequential
Starting >>> isaac_ros_common
Finished <<< isaac_ros_common [0.97s]           
Starting >>> isaac_ros_test
Finished <<< isaac_ros_test [14.7s]             
Starting >>> nvblox
Finished <<< nvblox [37.4s]             
Starting >>> nvblox_cpu_gpu_tools
Finished <<< nvblox_cpu_gpu_tools [0.75s]            
Starting >>> nvblox_examples_bringup
Finished <<< nvblox_examples_bringup [0.04s]
Starting >>> nvblox_isaac_sim
Finished <<< nvblox_isaac_sim [0.80s]            
Starting >>> nvblox_msgs
Finished <<< nvblox_msgs [0.04s]            
Starting >>> nvblox_performance_measurement_msgs
Finished <<< nvblox_performance_measurement_msgs [0.04s]
Starting >>> nvblox_ros_common
Finished <<< nvblox_ros_common [0.04s]            
Starting >>> realsense2_camera_msgs
Finished <<< realsense2_camera_msgs [5.12s]                
Starting >>> isaac_ros_apriltag_interfaces
Finished <<< isaac_ros_apriltag_interfaces [0.78s]                 
Starting >>> isaac_ros_bi3d_interfaces
Finished <<< isaac_ros_bi3d_interfaces [0.72s]                 
Starting >>> isaac_ros_gxf
Finished <<< isaac_ros_gxf [0.91s]                 
Starting >>> isaac_ros_nitros_interfaces
Finished <<< isaac_ros_nitros_interfaces [0.61s]                 
Starting >>> isaac_ros_pointcloud_interfaces
Finished <<< isaac_ros_pointcloud_interfaces [0.76s]                 
Starting >>> isaac_ros_tensor_list_interfaces
Finished <<< isaac_ros_tensor_list_interfaces [1.25s]                 
Starting >>> isaac_ros_visual_slam_interfaces
Finished <<< isaac_ros_visual_slam_interfaces [1.31s]                 
Starting >>> nvblox_image_padding
Finished <<< nvblox_image_padding [0.05s]
Starting >>> nvblox_nav2
Finished <<< nvblox_nav2 [1.99s]                 
Starting >>> nvblox_ros
Finished <<< nvblox_ros [15.8s]                   
Starting >>> nvblox_rviz_plugin
Finished <<< nvblox_rviz_plugin [0.04s]
Starting >>> realsense2_camera
Finished <<< realsense2_camera [0.04s]                  
Starting >>> realsense2_description
Finished <<< realsense2_description [0.04s]
Starting >>> realsense_splitter
Finished <<< realsense_splitter [1.83s]                  
Starting >>> semantic_label_conversion
Finished <<< semantic_label_conversion [0.04s]                  
Starting >>> isaac_ros_dnn_inference_test
Finished <<< isaac_ros_dnn_inference_test [2.44s]                  
Starting >>> isaac_ros_nitros
Finished <<< isaac_ros_nitros [6.35s]                  
Starting >>> isaac_ros_nvblox
Finished <<< isaac_ros_nvblox [0.95s]                  
Starting >>> nvblox_performance_measurement
Finished <<< nvblox_performance_measurement [0.05s]
Starting >>> isaac_ros_nitros_april_tag_detection_array_type
Finished <<< isaac_ros_nitros_april_tag_detection_array_type [6.55s]                  
Starting >>> isaac_ros_nitros_camera_info_type
Finished <<< isaac_ros_nitros_camera_info_type [5.58s]                  
Starting >>> isaac_ros_nitros_compressed_image_type
Finished <<< isaac_ros_nitros_compressed_image_type [5.75s]                  
Starting >>> isaac_ros_nitros_detection2_d_array_type
Finished <<< isaac_ros_nitros_detection2_d_array_type [5.67s]                  
Starting >>> isaac_ros_nitros_disparity_image_type
Finished <<< isaac_ros_nitros_disparity_image_type [6.61s]                  
Starting >>> isaac_ros_nitros_flat_scan_type
Finished <<< isaac_ros_nitros_flat_scan_type [5.67s]                  
Starting >>> isaac_ros_nitros_image_type
Finished <<< isaac_ros_nitros_image_type [5.52s]                  
Starting >>> isaac_ros_nitros_imu_type
Finished <<< isaac_ros_nitros_imu_type [5.43s]                  
Starting >>> isaac_ros_nitros_occupancy_grid_type
Finished <<< isaac_ros_nitros_occupancy_grid_type [6.42s]                  
Starting >>> isaac_ros_nitros_point_cloud_type
Finished <<< isaac_ros_nitros_point_cloud_type [5.46s]                  
Starting >>> isaac_ros_nitros_pose_array_type
Finished <<< isaac_ros_nitros_pose_array_type [5.45s]                  
Starting >>> isaac_ros_nitros_pose_cov_stamped_type
Finished <<< isaac_ros_nitros_pose_cov_stamped_type [5.39s]                  
Starting >>> isaac_ros_nitros_std_msg_type
Finished <<< isaac_ros_nitros_std_msg_type [5.45s]                  
Starting >>> isaac_ros_nitros_tensor_list_type
Finished <<< isaac_ros_nitros_tensor_list_type [7.56s]                  
Starting >>> isaac_ros_visual_slam
Finished <<< isaac_ros_visual_slam [26.8s]                   
Starting >>> isaac_ros_image_proc
--- stderr: isaac_ros_image_proc                            
Errors while running CTest
Output from these tests are in: /workspaces/isaac_ros-dev/build/isaac_ros_image_proc/Testing/Temporary/LastTest.log
Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
---
Finished <<< isaac_ros_image_proc [48.0s]   [ with test failures ]
Starting >>> isaac_ros_stereo_image_proc
Finished <<< isaac_ros_stereo_image_proc [22.7s]                   
Starting >>> isaac_ros_tensor_rt
Finished <<< isaac_ros_tensor_rt [1min 6s]                     
Starting >>> isaac_ros_triton
Finished <<< isaac_ros_triton [1min 20s]                     
Starting >>> isaac_ros_argus_camera
Finished <<< isaac_ros_argus_camera [2.44s]                 
Starting >>> isaac_ros_dnn_encoders
Finished <<< isaac_ros_dnn_encoders [12.9s]                   
Starting >>> isaac_ros_ess
Finished <<< isaac_ros_ess [6.23s]                  
Starting >>> isaac_ros_image_pipeline
Finished <<< isaac_ros_image_pipeline [1.00s]                  
Starting >>> isaac_ros_centerpose
Finished <<< isaac_ros_centerpose [14.2s]                   
Starting >>> isaac_ros_detectnet
Finished <<< isaac_ros_detectnet [1min 2s]                     
Starting >>> isaac_ros_dope
Finished <<< isaac_ros_dope [15.2s]                   
Starting >>> isaac_ros_unet
Finished <<< isaac_ros_unet [48.4s]                   

Summary: 56 packages finished [9min 44s]
  1 package had stderr output: isaac_ros_image_proc
  1 package had test failures: isaac_ros_image_proc
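As an aside, the isaac_ros_image_proc test failure in the summary above can be inspected further with colcon's standard reporting and selection options. A hedged sketch, meant to be run inside the dev container:

```shell
# Hedged sketch: dig into the isaac_ros_image_proc test failure reported
# in the colcon summary, using standard colcon options (run inside the
# isaac_ros-dev container).
inspect_failed_tests() {
  if ! command -v colcon >/dev/null 2>&1; then
    echo "colcon not found -- run this inside the isaac_ros-dev container."
    return 1
  fi
  # Summarize the recorded test results, then re-run only the failing
  # package with its console output shown directly.
  colcon test-result --verbose
  colcon test --packages-select isaac_ros_image_proc \
    --event-handlers console_direct+
}

inspect_failed_tests || true
```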

STEP #7

user@laptop-name:/workspaces/isaac_ros-dev$ cd /workspaces/isaac_ros-dev/src/isaac_ros_object_detection/isaac_ros_detectnet && \
>   ./scripts/setup_model.sh --height 632 --width 1200 --config-file resources/quickstart_config.pbtxt

***************************
using parameters:
MODEL_LINK : https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.5/zip
HEIGHT : 632
WIDTH : 1200
CONFIG_FILE_PATH : resources/quickstart_config.pbtxt
***************************

Creating Directory : /tmp/models/detectnet/1
Downloading .etlt file from https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.5/zip
From https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.5/zip
--2023-06-01 10:57:14--  https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.5/zip
Resolving api.ngc.nvidia.com (api.ngc.nvidia.com)... 13.57.164.51, 52.53.74.97
Connecting to api.ngc.nvidia.com (api.ngc.nvidia.com)|13.57.164.51|:443... connected.
HTTP request sent, awaiting response... 302 
Location: https://prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com/org/nvidia/team/tao/models/peoplenet/versions/deployable_quantized_v2.5/files.zip?response-content-disposition=attachment%3B%20filename%3D%22files.zip%22&response-content-type=application%2Fzip&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20230601T085717Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Credential=AKIA3PSNVSIZUODK3WZL%2F20230601%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Signature=487f316e3ea54d75187db3092a78da66b4f3b0ab19e42ee83578c6cc6d67ef8d [following]
--2023-06-01 10:57:17--  https://prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com/org/nvidia/team/tao/models/peoplenet/versions/deployable_quantized_v2.5/files.zip?response-content-disposition=attachment%3B%20filename%3D%22files.zip%22&response-content-type=application%2Fzip&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20230601T085717Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Credential=AKIA3PSNVSIZUODK3WZL%2F20230601%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Signature=487f316e3ea54d75187db3092a78da66b4f3b0ab19e42ee83578c6cc6d67ef8d
Resolving prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com (prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com)... 3.5.79.125, 52.218.153.1, 3.5.79.16, ...
Connecting to prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com (prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com)|3.5.79.125|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 89182857 (85M) [application/zip]
Saving to: ‘model.zip’

model.zip                    100%[=============================================>]  85.05M  2.58MB/s    in 81s     

2023-06-01 10:58:40 (1.04 MB/s) - ‘model.zip’ saved [89182857/89182857]

Unziping network model file .etlt
Archive:  model.zip
  inflating: labels.txt              
  inflating: resnet34_peoplenet_int8.etlt  
  inflating: resnet34_peoplenet_int8.txt  
Checking if labels.txt exists
Labels file received with model.
Converting .etlt to a TensorRT Engine Plan
[INFO] [MemUsageChange] Init CUDA: CPU +565, GPU +0, now: CPU 577, GPU 319 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU +517, GPU +116, now: CPU 1146, GPU 435 (MiB)
[WARNING] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[WARNING] The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +1286, GPU +360, now: CPU 2605, GPU 795 (MiB)
[INFO] [MemUsageChange] Init cuDNN: CPU +246, GPU +60, now: CPU 2851, GPU 855 (MiB)
[INFO] Timing cache disabled. Turning it on will improve builder speed.
[INFO] Detected 1 inputs and 2 output network tensors.
[INFO] Total Host Persistent Memory: 10336
[INFO] Total Device Persistent Memory: 0
[INFO] Total Scratch Memory: 0
[INFO] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 0 MiB, GPU 741 MiB
[INFO] [BlockAssignment] Algorithm ShiftNTopDown took 19.7399ms to assign 3 blocks to 248 nodes requiring 873676800 bytes.
[INFO] Total Activation Memory: 873676800
[INFO] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 3632, GPU 1185 (MiB)
[INFO] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 3631, GPU 1169 (MiB)
[INFO] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +833, now: CPU 0, GPU 918 (MiB)
[WARNING] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[INFO] Starting Calibration with batch size 8.
[INFO]   Post Processing Calibration data in 8.64e-07 seconds.
[INFO] Calibration completed in 1.85394 seconds.
[INFO] Writing Calibration Cache for calibrator: TRT-8500-EntropyCalibration2
[WARNING] Missing scale and zero-point for tensor output_bbox/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor input_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor conv1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor conv1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor conv1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor conv1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[... identical "Missing scale and zero-point" warnings repeat for every remaining tensor in the network; the log is truncated here ...]
[WARNING] Missing scale and zero-point for tensor block_1c_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_2/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1c_bn_shortcut/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor add_3/add, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor add_4/add, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor add_5/add, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_bn_2/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2c_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor

(... the same "Missing scale and zero-point" warning repeats for every remaining tensor in the network — the conv kernels, biases, and batch-norm parameters of the subsequent residual blocks ...)
[WARNING] Missing scale and zero-point for tensor block_3d_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_2/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3d_bn_shortcut/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor add_11/add, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_2/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3e_bn_shortcut/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor add_12/add, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_2/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3f_bn_shortcut/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor add_13/add, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[... the same warning repeats for every remaining block_4a/block_4b/block_4c, output_bbox, and output_cov tensor; truncated for brevity ...]
[WARNING] Missing scale and zero-point for tensor output_cov/Sigmoid, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +10, now: CPU 3709, GPU 1083 (MiB)
[INFO] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 3709, GPU 1091 (MiB)
[INFO] Local timing cache in use. Profiling results in this builder pass will not be stored.
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size will enable more tactics, please check verbose output for requested sizes.
[INFO] Detected 1 inputs and 2 output network tensors.
[INFO] Total Host Persistent Memory: 142912
[INFO] Total Device Persistent Memory: 0
[INFO] Total Scratch Memory: 0
[INFO] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 110 MiB, GPU 1491 MiB
[INFO] [BlockAssignment] Algorithm ShiftNTopDown took 1.33545ms to assign 4 blocks to 67 nodes requiring 607027200 bytes.
[INFO] Total Activation Memory: 607027200
[WARNING] TensorRT encountered issues when converting weights between types and that could affect accuracy.
[WARNING] If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
[WARNING] Check verbose logs for the list of affected weights.
[WARNING] - 52 weights are affected by this issue: Detected subnormal FP16 values.
[WARNING] - 16 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
[INFO] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +21, GPU +22, now: CPU 21, GPU 22 (MiB)
Copying .pbtxt config file to /tmp/models/detectnet
Completed quickstart setup

STEP #8

user@laptop-name:/workspaces/isaac_ros-dev/src/isaac_ros_object_detection/isaac_ros_detectnet$ cd /workspaces/isaac_ros-dev && \
>   ros2 launch isaac_ros_detectnet isaac_ros_detectnet_quickstart.launch.py
[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2023-06-01-11-04-24-070189-romolo-26346
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [ros2-1]: process started with pid [26357]
[INFO] [component_container_mt-2]: process started with pid [26359]
[INFO] [isaac_ros_detectnet_visualizer.py-3]: process started with pid [26361]
[INFO] [rqt_image_view-4]: process started with pid [26363]
[ros2-1] stdin is not a terminal device. Keyboard handling disabled.
[ros2-1] [INFO] [1685610264.377731317] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1685610264.377762163] [rosbag2_player]: Set rate to 1
[ros2-1] [INFO] [1685610264.380138854] [rosbag2_player]: Adding keyboard callbacks.
[ros2-1] [INFO] [1685610264.380154772] [rosbag2_player]: Press SPACE for Pause/Resume
[ros2-1] [INFO] [1685610264.380158992] [rosbag2_player]: Press CURSOR_RIGHT for Play Next Message
[ros2-1] [INFO] [1685610264.380162551] [rosbag2_player]: Press CURSOR_UP for Increase Rate 10%
[ros2-1] [INFO] [1685610264.380165705] [rosbag2_player]: Press CURSOR_DOWN for Decrease Rate 10%
[ros2-1] [INFO] [1685610264.380363498] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[component_container_mt-2] [INFO] [1685610264.439638785] [detectnet_container.detectnet_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_dnn_encoders/lib/libdnn_image_encoder_node.so
[component_container_mt-2] [INFO] [1685610264.449889507] [detectnet_container.detectnet_container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode>
[component_container_mt-2] [INFO] [1685610264.449922824] [detectnet_container.detectnet_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode>
[component_container_mt-2] [INFO] [1685610264.451533587] [NitrosContext]: [NitrosContext] Creating a new shared context
[component_container_mt-2] [INFO] [1685610264.451600669] [dnn_image_encoder]: [NitrosNode] Initializing NitrosNode
[component_container_mt-2] [INFO] [1685610264.451895427] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/std/libgxf_std.so
[component_container_mt-2] [INFO] [1685610264.453667602] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_gxf_helpers.so
[component_container_mt-2] [INFO] [1685610264.454975764] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_sight.so
[component_container_mt-2] [INFO] [1685610264.456359773] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_atlas.so
[component_container_mt-2] [INFO] [1685610264.458310373] [NitrosContext]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_nitros/share/isaac_ros_nitros/config/type_adapter_nitros_context_graph.yaml'
[component_container_mt-2] [INFO] [1685610264.458626422] [NitrosContext]: [NitrosContext] Initializing application...
[component_container_mt-2] [INFO] [1685610264.459547557] [NitrosContext]: [NitrosContext] Running application...
[component_container_mt-2] 2023-06-01 11:04:24.459 WARN  gxf/std/program.cpp@456: No system specified. Nothing to do
[component_container_mt-2] [INFO] [1685610264.459943976] [dnn_image_encoder]: [NitrosNode] Starting NitrosNode
[component_container_mt-2] [INFO] [1685610264.459956143] [dnn_image_encoder]: [NitrosNode] Loading built-in preset extension specs
[component_container_mt-2] [INFO] [1685610264.461127963] [dnn_image_encoder]: [NitrosNode] Loading built-in extension specs
[component_container_mt-2] [INFO] [1685610264.461138539] [dnn_image_encoder]: [NitrosNode] Loading preset extension specs
[component_container_mt-2] [INFO] [1685610264.462235955] [dnn_image_encoder]: [NitrosNode] Loading extension specs
[component_container_mt-2] [INFO] [1685610264.462245518] [dnn_image_encoder]: [NitrosNode] Loading generator rules
[component_container_mt-2] [INFO] [1685610264.462402626] [dnn_image_encoder]: [NitrosNode] Loading extensions
[component_container_mt-2] [INFO] [1685610264.462514677] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/libgxf_message_compositor.so
[component_container_mt-2] [INFO] [1685610264.462862841] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/cuda/libgxf_cuda.so
[component_container_mt-2] [INFO] [1685610264.463959280] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/serialization/libgxf_serialization.so
[component_container_mt-2] [INFO] [1685610264.465379475] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/image_proc/libgxf_tensorops.so
[component_container_mt-2] [INFO] [1685610264.466596288] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/multimedia/libgxf_multimedia.so
[component_container_mt-2] [INFO] [1685610264.466798647] [dnn_image_encoder]: [NitrosNode] Loading graph to the optimizer
[component_container_mt-2] [INFO] [1685610264.469567386] [dnn_image_encoder]: [NitrosNode] Running optimization
[component_container_mt-2] [INFO] [1685610264.552306594] [dnn_image_encoder]: [NitrosNode] Obtaining graph IO group info from the optimizer
[component_container_mt-2] [INFO] [1685610264.554014911] [dnn_image_encoder]: [NitrosNode] Creating negotiated publishers/subscribers
[component_container_mt-2] [INFO] [1685610264.558960767] [dnn_image_encoder]: [NitrosNode] Starting negotiation...
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/dnn_image_encoder' in container '/detectnet_container/detectnet_container'
[component_container_mt-2] [INFO] [1685610264.560501899] [detectnet_container.detectnet_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_triton/lib/libtriton_node.so
[component_container_mt-2] [INFO] [1685610264.561389315] [detectnet_container.detectnet_container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::TritonNode>
[component_container_mt-2] [INFO] [1685610264.561400929] [detectnet_container.detectnet_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::TritonNode>
[component_container_mt-2] [INFO] [1685610264.562930874] [triton_node]: [NitrosNode] Initializing NitrosNode
[component_container_mt-2] [INFO] [1685610264.563279777] [triton_node]: [TritonNode] Set input data format to: "nitros_tensor_list_nchw_rgb_f32"
[component_container_mt-2] [INFO] [1685610264.563286664] [triton_node]: [TritonNode] Set output data format to: "nitros_tensor_list_nhwc_rgb_f32"
[component_container_mt-2] [INFO] [1685610264.563335834] [triton_node]: [NitrosNode] Starting NitrosNode
[component_container_mt-2] [INFO] [1685610264.563339700] [triton_node]: [NitrosNode] Loading built-in preset extension specs
[component_container_mt-2] [INFO] [1685610264.564503987] [triton_node]: [NitrosNode] Loading built-in extension specs
[component_container_mt-2] [INFO] [1685610264.564537390] [triton_node]: [NitrosNode] Loading preset extension specs
[component_container_mt-2] [INFO] [1685610264.565498201] [triton_node]: [NitrosNode] Loading extension specs
[component_container_mt-2] [INFO] [1685610264.565506154] [triton_node]: [NitrosNode] Loading generator rules
[component_container_mt-2] [INFO] [1685610264.565614990] [triton_node]: [NitrosNode] Loading extensions
[component_container_mt-2] [INFO] [1685610264.566065405] [triton_node]: [NitrosContext] Loading extension: gxf/triton/libgxf_triton_ext.so
[ros2-1] [INFO] [1685610269.835369336] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1685610275.287991704] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
.
.
[latest message gets printed continuously]
jaiveersinghNV commented 1 year ago

The logs all look normal to us, so the issue might lie somewhere else.

Could you please confirm what platform you're running this graph on? Which model of Jetson, or which discrete graphics card running on x86?

Additionally, could you please use rqt's graph visualization tool to generate a picture of the network of nodes and topics? It's possible that there's a disconnect between some nodes that's preventing any messages from reaching the output topic.
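If rqt_graph won't install, the same connectivity check can be done from the ROS 2 CLI inside the container. A minimal diagnostic sketch follows; the topic and node names below are assumptions based on the quickstart launch file, so substitute whatever `ros2 topic list` actually reports:

```shell
# Verify that all expected nodes came up
ros2 node list
ros2 topic list

# Check whether messages are flowing on the detection output
# (topic name is an assumption; adjust to your graph)
ros2 topic hz /detectnet/detections
ros2 topic echo /detectnet/detections --once

# Inspect one node's publishers/subscribers to spot a topic-name
# mismatch between stages of the pipeline
ros2 node info /dnn_image_encoder
```

If `ros2 topic hz` reports no messages on an intermediate topic, the disconnect is upstream of that topic; if the detections topic is active but rqt_image_view stays blank, the problem is in the visualizer stage instead.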

marco-monforte commented 1 year ago

Sure, I'm on a Schenker Vision 16 Pro with 12th Gen Intel® Core™ i7-12700H × 20 and NVIDIA GeForce RTX 3070 Ti, running Ubuntu 20.04.6 LTS with kernel 5.15.0-73-generic.

I'm running the graph from the provided Docker container and, as additional information, the vSLAM and Nvblox graphs run fine.

I've also been trying to run rqt_graph, but apparently I have a version mismatch between Focal and Humble that prevents me from installing it. In the meantime, I ran ros2 doctor, and this is the output:

user@laptop-name:/workspaces/isaac_ros-dev$ ros2 doctor 
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_amcl has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: dwb_plugins has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_smac_planner has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_behavior_tree has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_constrained_smoother has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_smoother has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_velocity_smoother has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_waypoint_follower has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_collision_monitor has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_bringup has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_planner has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_controller has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_rviz_plugins has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: behaviortree_cpp_v3 has been updated to a new version. local: 3.7.0 < latest: 3.8.3
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: costmap_queue has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_simple_commander has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_theta_star_planner has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav_2d_msgs has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_map_server has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_costmap_2d has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_rotation_shim_controller has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_behaviors has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_voxel_grid has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: dwb_core has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_core has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav_2d_utils has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_regulated_pure_pursuit_controller has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: dwb_msgs has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_dwb_controller has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_msgs has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: navigation2 has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_util has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: dwb_critics has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_navfn_planner has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_bt_navigator has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_common has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 112: UserWarning: nav2_lifecycle_manager has been updated to a new version. local: 1.1.2 < latest: 1.1.6
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/package.py: 124: UserWarning: Cannot find the latest versions of packages:  negotiated_examples karto_sdk negotiated vda5050_msgs negotiated_interfaces
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/topic.py: 45: UserWarning: Subscriber without publisher detected on /detectnet/detections.
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /events/read_split.
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/topic.py: 45: UserWarning: Subscriber without publisher detected on /image/nitros.
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /image/nitros/_supported_types.
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /tensor_pub.
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /tensor_pub/nitros.
/opt/ros/humble/install/lib/python3.8/site-packages/ros2doctor/api/topic.py: 45: UserWarning: Subscriber without publisher detected on /tensor_pub/nitros/_supported_types.

Additional note: at one point, on a retry, I did see output, but it was incorrect. Unfortunately, I haven't been able to reproduce that behavior.

marco-monforte commented 1 year ago

Thank you for solving the issue!

In case it's useful: the first time I re-ran the script, at step 8 the application did not work and produced the following log:

ros2 launch isaac_ros_detectnet isaac_ros_detectnet_quickstart.launch.py
[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2023-08-17-10-49-09-406208-romolo-8993
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [ros2-1]: process started with pid [9004]
[INFO] [component_container_mt-2]: process started with pid [9006]
[INFO] [isaac_ros_detectnet_visualizer.py-3]: process started with pid [9008]
[INFO] [rqt_image_view-4]: process started with pid [9010]
[component_container_mt-2] [INFO] [1692262149.883958657] [detectnet_container.detectnet_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_dnn_encoders/lib/libdnn_image_encoder_node.so
[component_container_mt-2] [INFO] [1692262149.971491909] [detectnet_container.detectnet_container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode>
[component_container_mt-2] [INFO] [1692262149.971557891] [detectnet_container.detectnet_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode>
[component_container_mt-2] [INFO] [1692262149.975409923] [NitrosContext]: [NitrosContext] Creating a new shared context
[component_container_mt-2] [INFO] [1692262149.975566607] [dnn_image_encoder]: [NitrosNode] Initializing NitrosNode
[component_container_mt-2] [INFO] [1692262149.976350772] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/std/libgxf_std.so
[component_container_mt-2] [INFO] [1692262149.982800828] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_gxf_helpers.so
[component_container_mt-2] [INFO] [1692262149.987896274] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_sight.so
[component_container_mt-2] [INFO] [1692262149.994009032] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_atlas.so
[component_container_mt-2] [INFO] [1692262150.006718149] [NitrosContext]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_nitros/share/isaac_ros_nitros/config/type_adapter_nitros_context_graph.yaml'
[ros2-1] stdin is not a terminal device. Keyboard handling disabled.[INFO] [1692262150.008032217] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262150.008480528] [rosbag2_player]: Set rate to 1
[component_container_mt-2] [INFO] [1692262150.008855650] [NitrosContext]: [NitrosContext] Initializing application...
[component_container_mt-2] [INFO] [1692262150.013205313] [NitrosContext]: [NitrosContext] Running application...
[component_container_mt-2] 2023-08-17 10:49:10.013 WARN  gxf/std/program.cpp@456: No system specified. Nothing to do
[component_container_mt-2] [INFO] [1692262150.015941286] [dnn_image_encoder]: [NitrosNode] Starting NitrosNode
[component_container_mt-2] [INFO] [1692262150.016003968] [dnn_image_encoder]: [NitrosNode] Loading built-in preset extension specs
[ros2-1] [INFO] [1692262150.021072226] [rosbag2_player]: Adding keyboard callbacks.
[ros2-1] [INFO] [1692262150.021120861] [rosbag2_player]: Press SPACE for Pause/Resume
[ros2-1] [INFO] [1692262150.021134283] [rosbag2_player]: Press CURSOR_RIGHT for Play Next Message
[ros2-1] [INFO] [1692262150.021144899] [rosbag2_player]: Press CURSOR_UP for Increase Rate 10%
[ros2-1] [INFO] [1692262150.021154133] [rosbag2_player]: Press CURSOR_DOWN for Decrease Rate 10%
[component_container_mt-2] [INFO] [1692262150.021134802] [dnn_image_encoder]: [NitrosNode] Loading built-in extension specs
[component_container_mt-2] [INFO] [1692262150.021197783] [dnn_image_encoder]: [NitrosNode] Loading preset extension specs
[ros2-1] [INFO] [1692262150.021711571] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[component_container_mt-2] [INFO] [1692262150.024234416] [dnn_image_encoder]: [NitrosNode] Loading extension specs
[component_container_mt-2] [INFO] [1692262150.024251951] [dnn_image_encoder]: [NitrosNode] Loading generator rules
[component_container_mt-2] [INFO] [1692262150.024607977] [dnn_image_encoder]: [NitrosNode] Loading extensions
[component_container_mt-2] [INFO] [1692262150.024945119] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/libgxf_message_compositor.so
[component_container_mt-2] [INFO] [1692262150.025883374] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/cuda/libgxf_cuda.so
[component_container_mt-2] [INFO] [1692262150.028190145] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/serialization/libgxf_serialization.so
[component_container_mt-2] [INFO] [1692262150.034779889] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/image_proc/libgxf_tensorops.so
[component_container_mt-2] [INFO] [1692262150.042878041] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/multimedia/libgxf_multimedia.so
[component_container_mt-2] [INFO] [1692262150.043643696] [dnn_image_encoder]: [NitrosNode] Loading graph to the optimizer
[component_container_mt-2] [INFO] [1692262150.051209373] [dnn_image_encoder]: [NitrosNode] Running optimization
[component_container_mt-2] [INFO] [1692262150.211912462] [dnn_image_encoder]: [NitrosNode] Obtaining graph IO group info from the optimizer
[component_container_mt-2] [INFO] [1692262150.214626605] [dnn_image_encoder]: [NitrosNode] Creating negotiated publishers/subscribers
[component_container_mt-2] [INFO] [1692262150.224347745] [dnn_image_encoder]: [NitrosNode] Starting negotiation...
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/dnn_image_encoder' in container '/detectnet_container/detectnet_container'
[component_container_mt-2] [INFO] [1692262150.229537137] [detectnet_container.detectnet_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_triton/lib/libtriton_node.so
[component_container_mt-2] [INFO] [1692262150.232814412] [detectnet_container.detectnet_container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::TritonNode>
[component_container_mt-2] [INFO] [1692262150.232841999] [detectnet_container.detectnet_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::TritonNode>
[component_container_mt-2] [INFO] [1692262150.240110139] [triton_node]: [NitrosNode] Initializing NitrosNode
[component_container_mt-2] [INFO] [1692262150.241213883] [triton_node]: [TritonNode] Set input data format to: "nitros_tensor_list_nchw_rgb_f32"
[component_container_mt-2] [INFO] [1692262150.241237003] [triton_node]: [TritonNode] Set output data format to: "nitros_tensor_list_nhwc_rgb_f32"
[component_container_mt-2] [INFO] [1692262150.241367533] [triton_node]: [NitrosNode] Starting NitrosNode
[component_container_mt-2] [INFO] [1692262150.241376585] [triton_node]: [NitrosNode] Loading built-in preset extension specs
[component_container_mt-2] [INFO] [1692262150.245054612] [triton_node]: [NitrosNode] Loading built-in extension specs
[component_container_mt-2] [INFO] [1692262150.245084800] [triton_node]: [NitrosNode] Loading preset extension specs
[component_container_mt-2] [INFO] [1692262150.248255104] [triton_node]: [NitrosNode] Loading extension specs
[component_container_mt-2] [INFO] [1692262150.248273584] [triton_node]: [NitrosNode] Loading generator rules
[component_container_mt-2] [INFO] [1692262150.248684536] [triton_node]: [NitrosNode] Loading extensions
[component_container_mt-2] [INFO] [1692262150.250181720] [triton_node]: [NitrosContext] Loading extension: gxf/triton/libgxf_triton_ext.so
[ros2-1] [INFO] [1692262155.477152770] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262160.932043372] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262166.388255096] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262171.843696121] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262177.298915976] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[... the rosbag2_storage "Opened database ... for READ_ONLY" message repeats ...]
^C[WARNING] [launch]: user interrupted with ctrl-c (SIGINT)
[WARNING] [launch_ros.actions.load_composable_nodes]: Abandoning wait for the '/detectnet_container/detectnet_container/_container/load_node' service response, due to shutdown.
[component_container_mt-2] [INFO] [1692262461.357592237] [rclcpp]: signal_handler(signum=2)
[ERROR] [isaac_ros_detectnet_visualizer.py-3]: process has died [pid 9008, exit code -2, cmd '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/lib/isaac_ros_detectnet/isaac_ros_detectnet_visualizer.py --ros-args -r __node:=detectnet_visualizer'].
Future exception was never retrieved
future: <Future finished exception=InvalidHandle('cannot use Destroyable because destruction was requested')>
Traceback (most recent call last):
  File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/opt/ros/humble/install/lib/python3.8/site-packages/launch_ros/actions/load_composable_nodes.py", line 197, in _load_in_sequence
    self._load_node(next_load_node_request, context)
  File "/opt/ros/humble/install/lib/python3.8/site-packages/launch_ros/actions/load_composable_nodes.py", line 119, in _load_node
    while not self.__rclpy_load_node_client.wait_for_service(timeout_sec=1.0):
  File "/opt/ros/humble/install/lib/python3.8/site-packages/rclpy/client.py", line 180, in wait_for_service
    return self.service_is_ready()
  File "/opt/ros/humble/install/lib/python3.8/site-packages/rclpy/client.py", line 159, in service_is_ready
    with self.handle:
rclpy._rclpy_pybind11.InvalidHandle: cannot use Destroyable because destruction was requested
[rqt_image_view-4] [INFO] [1692262461.357576208] [rclcpp]: signal_handler(signum=2)
[ros2-1] [INFO] [1692262461.357664588] [rclcpp]: signal_handler(signum=2)
[isaac_ros_detectnet_visualizer.py-3] Traceback (most recent call last):
[isaac_ros_detectnet_visualizer.py-3]   File "/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/lib/isaac_ros_detectnet/isaac_ros_detectnet_visualizer.py", line 87, in <module>
[isaac_ros_detectnet_visualizer.py-3]     main()
[isaac_ros_detectnet_visualizer.py-3]   File "/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/lib/isaac_ros_detectnet/isaac_ros_detectnet_visualizer.py", line 82, in main
[isaac_ros_detectnet_visualizer.py-3]     rclpy.spin(DetectNetVisualizer())
[isaac_ros_detectnet_visualizer.py-3]   File "/opt/ros/humble/install/lib/python3.8/site-packages/rclpy/__init__.py", line 222, in spin
[isaac_ros_detectnet_visualizer.py-3]     executor.spin_once()
[isaac_ros_detectnet_visualizer.py-3]   File "/opt/ros/humble/install/lib/python3.8/site-packages/rclpy/executors.py", line 705, in spin_once
[isaac_ros_detectnet_visualizer.py-3]     handler, entity, node = self.wait_for_ready_callbacks(timeout_sec=timeout_sec)
[isaac_ros_detectnet_visualizer.py-3]   File "/opt/ros/humble/install/lib/python3.8/site-packages/rclpy/executors.py", line 691, in wait_for_ready_callbacks
[isaac_ros_detectnet_visualizer.py-3]     return next(self._cb_iter)
[isaac_ros_detectnet_visualizer.py-3]   File "/opt/ros/humble/install/lib/python3.8/site-packages/rclpy/executors.py", line 588, in _wait_for_ready_callbacks
[isaac_ros_detectnet_visualizer.py-3]     wait_set.wait(timeout_nsec)
[isaac_ros_detectnet_visualizer.py-3] KeyboardInterrupt
[rqt_image_view-4] QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-admin'
[INFO] [ros2-1]: process has finished cleanly [pid 9004]

The second time (and on every subsequent attempt) it worked, with the following log:

ros2 launch isaac_ros_detectnet isaac_ros_detectnet_quickstart.launch.py
[INFO] [launch]: All log files can be found below /home/admin/.ros/log/2023-08-17-10-54-35-559251-romolo-9147
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [ros2-1]: process started with pid [9158]
[INFO] [component_container_mt-2]: process started with pid [9160]
[INFO] [isaac_ros_detectnet_visualizer.py-3]: process started with pid [9162]
[INFO] [rqt_image_view-4]: process started with pid [9164]
[component_container_mt-2] [INFO] [1692262475.975283428] [detectnet_container.detectnet_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_dnn_encoders/lib/libdnn_image_encoder_node.so
[component_container_mt-2] [INFO] [1692262476.010905546] [detectnet_container.detectnet_container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode>
[component_container_mt-2] [INFO] [1692262476.010997515] [detectnet_container.detectnet_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode>
[component_container_mt-2] [INFO] [1692262476.015643447] [NitrosContext]: [NitrosContext] Creating a new shared context
[component_container_mt-2] [INFO] [1692262476.015835739] [dnn_image_encoder]: [NitrosNode] Initializing NitrosNode
[component_container_mt-2] [INFO] [1692262476.016742291] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/std/libgxf_std.so
[component_container_mt-2] [INFO] [1692262476.022510949] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_gxf_helpers.so
[component_container_mt-2] [INFO] [1692262476.026090402] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_sight.so
[component_container_mt-2] [INFO] [1692262476.030530335] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/libgxf_atlas.so
[component_container_mt-2] [INFO] [1692262476.037825025] [NitrosContext]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_nitros/share/isaac_ros_nitros/config/type_adapter_nitros_context_graph.yaml'
[component_container_mt-2] [INFO] [1692262476.038711142] [NitrosContext]: [NitrosContext] Initializing application...
[component_container_mt-2] [INFO] [1692262476.040836161] [NitrosContext]: [NitrosContext] Running application...
[component_container_mt-2] 2023-08-17 10:54:36.040 WARN  gxf/std/program.cpp@456: No system specified. Nothing to do
[component_container_mt-2] [INFO] [1692262476.041998707] [dnn_image_encoder]: [NitrosNode] Starting NitrosNode
[component_container_mt-2] [INFO] [1692262476.042043544] [dnn_image_encoder]: [NitrosNode] Loading built-in preset extension specs
[component_container_mt-2] [INFO] [1692262476.045833278] [dnn_image_encoder]: [NitrosNode] Loading built-in extension specs
[component_container_mt-2] [INFO] [1692262476.045857813] [dnn_image_encoder]: [NitrosNode] Loading preset extension specs
[component_container_mt-2] [INFO] [1692262476.049216336] [dnn_image_encoder]: [NitrosNode] Loading extension specs
[component_container_mt-2] [INFO] [1692262476.049242352] [dnn_image_encoder]: [NitrosNode] Loading generator rules
[component_container_mt-2] [INFO] [1692262476.049873709] [dnn_image_encoder]: [NitrosNode] Loading extensions
[component_container_mt-2] [INFO] [1692262476.050474262] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/libgxf_message_compositor.so
[component_container_mt-2] [INFO] [1692262476.051850146] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/cuda/libgxf_cuda.so
[component_container_mt-2] [INFO] [1692262476.055106797] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/serialization/libgxf_serialization.so
[component_container_mt-2] [INFO] [1692262476.058182176] [dnn_image_encoder]: [NitrosContext] Loading extension: gxf/lib/image_proc/libgxf_tensorops.so
[component_container_mt-2] [INFO] [1692262476.061497029] [NitrosContext]: [NitrosContext] Loading extension: gxf/lib/multimedia/libgxf_multimedia.so
[component_container_mt-2] [INFO] [1692262476.062493052] [dnn_image_encoder]: [NitrosNode] Loading graph to the optimizer
[component_container_mt-2] [INFO] [1692262476.069964061] [dnn_image_encoder]: [NitrosNode] Running optimization
[ros2-1] stdin is not a terminal device. Keyboard handling disabled.[INFO] [1692262476.140019607] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262476.140091308] [rosbag2_player]: Set rate to 1
[ros2-1] [INFO] [1692262476.150321758] [rosbag2_player]: Adding keyboard callbacks.
[ros2-1] [INFO] [1692262476.150417801] [rosbag2_player]: Press SPACE for Pause/Resume
[ros2-1] [INFO] [1692262476.150437379] [rosbag2_player]: Press CURSOR_RIGHT for Play Next Message
[ros2-1] [INFO] [1692262476.150452866] [rosbag2_player]: Press CURSOR_UP for Increase Rate 10%
[ros2-1] [INFO] [1692262476.150467575] [rosbag2_player]: Press CURSOR_DOWN for Decrease Rate 10%
[ros2-1] [INFO] [1692262476.151334958] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[component_container_mt-2] [INFO] [1692262476.301787688] [dnn_image_encoder]: [NitrosNode] Obtaining graph IO group info from the optimizer
[component_container_mt-2] [INFO] [1692262476.304506420] [dnn_image_encoder]: [NitrosNode] Creating negotiated publishers/subscribers
[component_container_mt-2] [INFO] [1692262476.318985825] [dnn_image_encoder]: [NitrosNode] Starting negotiation...
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/dnn_image_encoder' in container '/detectnet_container/detectnet_container'
[component_container_mt-2] [INFO] [1692262476.324321286] [detectnet_container.detectnet_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_triton/lib/libtriton_node.so
[component_container_mt-2] [INFO] [1692262476.326589066] [detectnet_container.detectnet_container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::TritonNode>
[component_container_mt-2] [INFO] [1692262476.326611176] [detectnet_container.detectnet_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::dnn_inference::TritonNode>
[component_container_mt-2] [INFO] [1692262476.331408570] [triton_node]: [NitrosNode] Initializing NitrosNode
[component_container_mt-2] [INFO] [1692262476.332268946] [triton_node]: [TritonNode] Set input data format to: "nitros_tensor_list_nchw_rgb_f32"
[component_container_mt-2] [INFO] [1692262476.332288263] [triton_node]: [TritonNode] Set output data format to: "nitros_tensor_list_nhwc_rgb_f32"
[component_container_mt-2] [INFO] [1692262476.332407766] [triton_node]: [NitrosNode] Starting NitrosNode
[component_container_mt-2] [INFO] [1692262476.332417992] [triton_node]: [NitrosNode] Loading built-in preset extension specs
[component_container_mt-2] [INFO] [1692262476.334613259] [triton_node]: [NitrosNode] Loading built-in extension specs
[component_container_mt-2] [INFO] [1692262476.334624395] [triton_node]: [NitrosNode] Loading preset extension specs
[component_container_mt-2] [INFO] [1692262476.336102538] [triton_node]: [NitrosNode] Loading extension specs
[component_container_mt-2] [INFO] [1692262476.336112049] [triton_node]: [NitrosNode] Loading generator rules
[component_container_mt-2] [INFO] [1692262476.336247546] [triton_node]: [NitrosNode] Loading extensions
[component_container_mt-2] [INFO] [1692262476.337149541] [triton_node]: [NitrosContext] Loading extension: gxf/triton/libgxf_triton_ext.so
[component_container_mt-2] [INFO] [1692262476.398971048] [triton_node]: [NitrosNode] Loading graph to the optimizer
[component_container_mt-2] [INFO] [1692262476.402236814] [triton_node]: [NitrosNode] Running optimization
[component_container_mt-2] [INFO] [1692262476.503165101] [dnn_image_encoder]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262476.506386670] [dnn_image_encoder]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262476.522819028] [triton_node]: [NitrosNode] Obtaining graph IO group info from the optimizer
[component_container_mt-2] [INFO] [1692262476.534747489] [triton_node]: [NitrosNode] Creating negotiated publishers/subscribers
[component_container_mt-2] [INFO] [1692262476.535067257] [triton_node]: [NitrosPublisherSubscriberGroup] Pinning the component "triton_request/input" (type="nvidia::gxf::DoubleBufferReceiver") to use its compatible format only: "nitros_tensor_list_nchw_rgb_f32"
[component_container_mt-2] [INFO] [1692262476.537601112] [triton_node]: [NitrosPublisherSubscriberGroup] Pinning the component "sink/sink" (type="nvidia::isaac_ros::MessageRelay") to use its compatible format only: "nitros_tensor_list_nhwc_rgb_f32"
[component_container_mt-2] [INFO] [1692262476.538285360] [triton_node]: [NitrosNode] Starting negotiation...
[component_container_mt-2] [INFO] [1692262476.538607892] [dnn_image_encoder]: Negotiating
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/triton_node' in container '/detectnet_container/detectnet_container'
[component_container_mt-2] [INFO] [1692262476.541763474] [detectnet_container.detectnet_container]: Load Library: /workspaces/isaac_ros-dev/install/isaac_ros_detectnet/lib/libdetectnet_decoder_node.so
[component_container_mt-2] [INFO] [1692262476.546748967] [detectnet_container.detectnet_container]: Found class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::detectnet::DetectNetDecoderNode>
[component_container_mt-2] [INFO] [1692262476.546781821] [detectnet_container.detectnet_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<nvidia::isaac_ros::detectnet::DetectNetDecoderNode>
[component_container_mt-2] [INFO] [1692262476.549692364] [dnn_image_encoder]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262476.553676445] [detectnet_decoder_node]: [NitrosNode] Initializing NitrosNode
[component_container_mt-2] [INFO] [1692262476.555565035] [detectnet_decoder_node]: [NitrosNode] Starting NitrosNode
[component_container_mt-2] [INFO] [1692262476.555600698] [detectnet_decoder_node]: [NitrosNode] Loading built-in preset extension specs
[component_container_mt-2] [INFO] [1692262476.558969971] [detectnet_decoder_node]: [NitrosNode] Loading built-in extension specs
[component_container_mt-2] [INFO] [1692262476.558995452] [detectnet_decoder_node]: [NitrosNode] Loading preset extension specs
[component_container_mt-2] [INFO] [1692262476.559514923] [detectnet_decoder_node]: [NitrosNode] Loading extension specs
[component_container_mt-2] [INFO] [1692262476.559532314] [detectnet_decoder_node]: [NitrosNode] Loading generator rules
[component_container_mt-2] [INFO] [1692262476.559543220] [detectnet_decoder_node]: [NitrosNode] Loading extensions
[component_container_mt-2] [INFO] [1692262476.561118317] [detectnet_decoder_node]: [NitrosContext] Loading extension: gxf/lib/detectnet/libgxf_detectnet.so
[component_container_mt-2] [INFO] [1692262476.563274676] [detectnet_decoder_node]: [NitrosNode] Loading graph to the optimizer
[component_container_mt-2] [INFO] [1692262476.564756845] [detectnet_decoder_node]: [NitrosNode] Running optimization
[component_container_mt-2] [INFO] [1692262476.567466648] [detectnet_decoder_node]: [NitrosNode] Obtaining graph IO group info from the optimizer
[component_container_mt-2] [INFO] [1692262476.567718384] [detectnet_decoder_node]: [NitrosNode] Creating negotiated publishers/subscribers
[component_container_mt-2] [INFO] [1692262476.572112627] [detectnet_decoder_node]: [NitrosNode] Starting negotiation...
[component_container_mt-2] [INFO] [1692262476.572569848] [triton_node]: Negotiating
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/detectnet_decoder_node' in container '/detectnet_container/detectnet_container'
[component_container_mt-2] [INFO] [1692262476.572681271] [triton_node]: Could not negotiate
[component_container_mt-2] [INFO] [1692262476.654975169] [dnn_image_encoder]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262476.755671387] [dnn_image_encoder]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262476.855524683] [dnn_image_encoder]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262476.957133262] [dnn_image_encoder]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262477.061397507] [dnn_image_encoder]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262477.159554740] [dnn_image_encoder]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262477.260423359] [dnn_image_encoder]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262477.319775755] [dnn_image_encoder]: [NitrosNode] Starting post negotiation setup
[component_container_mt-2] [INFO] [1692262477.319904223] [dnn_image_encoder]: [NitrosNode] Getting data format negotiation results
[component_container_mt-2] [INFO] [1692262477.319934697] [dnn_image_encoder]: [NitrosPublisher] Use the negotiated data format: "nitros_tensor_list_nchw_rgb_f32"
[component_container_mt-2] [INFO] [1692262477.319954922] [dnn_image_encoder]: [NitrosSubscriber] Negotiation ended with no results
[component_container_mt-2] [INFO] [1692262477.319970300] [dnn_image_encoder]: [NitrosSubscriber] Use the compatible subscriber: topic_name="/image", data_format="nitros_image_bgr8"
[component_container_mt-2] [INFO] [1692262477.320047454] [dnn_image_encoder]: [NitrosNode] Exporting the final graph based on the negotiation results
[component_container_mt-2] [INFO] [1692262477.336349730] [dnn_image_encoder]: [NitrosNode] Wrote the final top level YAML graph to "/workspaces/isaac_ros-dev/install/isaac_ros_dnn_encoders/share/isaac_ros_dnn_encoders/VORQYTACYR.yaml"
[component_container_mt-2] [INFO] [1692262477.336402749] [dnn_image_encoder]: [NitrosNode] Calling user's pre-load-graph callback
[component_container_mt-2] [INFO] [1692262477.336413395] [dnn_image_encoder]: In DNN Image Encoder Node preLoadGraphCallback().
[component_container_mt-2] [INFO] [1692262477.336444181] [dnn_image_encoder]: [NitrosNode] Loading application
[component_container_mt-2] [INFO] [1692262477.336452298] [dnn_image_encoder]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_dnn_encoders/share/isaac_ros_dnn_encoders/VORQYTACYR.yaml'
[component_container_mt-2] [INFO] [1692262477.339326590] [dnn_image_encoder]: [NitrosNode] Linking Nitros pub/sub to the loaded application
[component_container_mt-2] [INFO] [1692262477.339387892] [dnn_image_encoder]: [NitrosNode] Calling user's post-load-graph callback
[component_container_mt-2] [INFO] [1692262477.339392841] [dnn_image_encoder]: In DNN Image Encoder Node postLoadGraphCallback().
[component_container_mt-2] [INFO] [1692262477.339445963] [dnn_image_encoder]: [NitrosContext] Initializing application...
[component_container_mt-2] [INFO] [1692262477.341336724] [dnn_image_encoder]: [NitrosContext] Running application...
[component_container_mt-2] [INFO] [1692262477.350761740] [dnn_image_encoder]: [NitrosNode] Starting a heartbeat timer (eid=62)
[component_container_mt-2] [INFO] [1692262477.462606676] [triton_node]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262477.471136313] [triton_node]: [NitrosSubscriber] Received a message but the application receiver's pointer is not yet set.
[component_container_mt-2] [INFO] [1692262477.538832138] [triton_node]: [NitrosNode] Starting post negotiation setup
[component_container_mt-2] [INFO] [1692262477.538949505] [triton_node]: [NitrosNode] Getting data format negotiation results
[component_container_mt-2] [INFO] [1692262477.538971153] [triton_node]: [NitrosSubscriber] Use the negotiated data format: "nitros_tensor_list_nchw_rgb_f32"
[component_container_mt-2] [INFO] [1692262477.539011406] [triton_node]: [NitrosPublisher] Negotiation ended with no results
[component_container_mt-2] [INFO] [1692262477.539027020] [triton_node]: [NitrosPublisher] Use only the compatible publisher: topic_name="/tensor_sub", data_format="nitros_tensor_list_nhwc_rgb_f32"
[component_container_mt-2] [INFO] [1692262477.539049948] [triton_node]: [NitrosNode] Exporting the final graph based on the negotiation results
[component_container_mt-2] [INFO] [1692262477.557272885] [triton_node]: [NitrosNode] Wrote the final top level YAML graph to "/workspaces/isaac_ros-dev/install/isaac_ros_triton/share/isaac_ros_triton/FLAXGXDZTN.yaml"
[component_container_mt-2] [INFO] [1692262477.557339336] [triton_node]: [NitrosNode] Calling user's pre-load-graph callback
[component_container_mt-2] [INFO] [1692262477.557345888] [triton_node]: [NitrosNode] Loading application
[component_container_mt-2] [INFO] [1692262477.557353848] [triton_node]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_triton/share/isaac_ros_triton/FLAXGXDZTN.yaml'
[component_container_mt-2] 2023-08-17 10:54:37.559 WARN  gxf/std/yaml_file_loader.cpp@952: Using unregistered parameter 'dummy_rx' in component 'requester'.
[component_container_mt-2] [INFO] [1692262477.559656292] [triton_node]: [NitrosNode] Linking Nitros pub/sub to the loaded application
[component_container_mt-2] [INFO] [1692262477.559742066] [triton_node]: [NitrosNode] Calling user's post-load-graph callback
[component_container_mt-2] [INFO] [1692262477.559752949] [triton_node]: In TritonNode postLoadGraphCallback().
[component_container_mt-2] [INFO] [1692262477.559854298] [triton_node]: [NitrosContext] Initializing application...
[component_container_mt-2] [INFO] [1692262477.572550944] [detectnet_decoder_node]: [NitrosNode] Starting post negotiation setup
[component_container_mt-2] [INFO] [1692262477.572669934] [detectnet_decoder_node]: [NitrosNode] Getting data format negotiation results
[component_container_mt-2] [INFO] [1692262477.572690012] [detectnet_decoder_node]: [NitrosPublisher] Negotiation ended with no results
[component_container_mt-2] [INFO] [1692262477.572708132] [detectnet_decoder_node]: [NitrosPublisher] Use only the compatible publisher: topic_name="/detectnet/detections", data_format="nitros_detection2_d_array"
[component_container_mt-2] [INFO] [1692262477.572734479] [detectnet_decoder_node]: [NitrosSubscriber] Negotiation ended with no results
[component_container_mt-2] [INFO] [1692262477.572753545] [detectnet_decoder_node]: [NitrosSubscriber] Use the compatible subscriber: topic_name="/tensor_sub", data_format="nitros_tensor_list_nchw_rgb_f32"
[component_container_mt-2] [INFO] [1692262477.572796170] [detectnet_decoder_node]: [NitrosNode] Exporting the final graph based on the negotiation results
[component_container_mt-2] [INFO] [1692262477.576423519] [detectnet_decoder_node]: [NitrosNode] Wrote the final top level YAML graph to "/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/STSNUHJMYN.yaml"
[component_container_mt-2] [INFO] [1692262477.576516126] [detectnet_decoder_node]: [NitrosNode] Calling user's pre-load-graph callback
[component_container_mt-2] [INFO] [1692262477.576526504] [detectnet_decoder_node]: [NitrosNode] Loading application
[component_container_mt-2] WARNING: infer_trtis_server.cpp:1219 NvDsTritonServerInit suggest to set model_control_mode:none. otherwise may cause unknow issues.
[component_container_mt-2] I0817 08:54:37.634981 9160 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7fcbf8000000' with size 268435456
[component_container_mt-2] I0817 08:54:37.635295 9160 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
[component_container_mt-2] I0817 08:54:37.636892 9160 server.cc:563] 
[component_container_mt-2] +------------------+------+
[component_container_mt-2] | Repository Agent | Path |
[component_container_mt-2] +------------------+------+
[component_container_mt-2] +------------------+------+
[component_container_mt-2] 
[component_container_mt-2] I0817 08:54:37.636913 9160 server.cc:590] 
[component_container_mt-2] +---------+------+--------+
[component_container_mt-2] | Backend | Path | Config |
[component_container_mt-2] +---------+------+--------+
[component_container_mt-2] +---------+------+--------+
[component_container_mt-2] 
[component_container_mt-2] I0817 08:54:37.636925 9160 server.cc:633] 
[component_container_mt-2] +-------+---------+--------+
[component_container_mt-2] | Model | Version | Status |
[component_container_mt-2] +-------+---------+--------+
[component_container_mt-2] +-------+---------+--------+
[component_container_mt-2] 
[component_container_mt-2] I0817 08:54:37.685163 9160 metrics.cc:864] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU
[component_container_mt-2] I0817 08:54:37.685447 9160 metrics.cc:757] Collecting CPU metrics
[component_container_mt-2] I0817 08:54:37.685686 9160 tritonserver.cc:2264] 
[component_container_mt-2] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container_mt-2] | Option                           | Value                                                                                                                                                                                                |
[component_container_mt-2] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container_mt-2] | server_id                        | triton                                                                                                                                                                                               |
[component_container_mt-2] | server_version                   | 2.26.0                                                                                                                                                                                               |
[component_container_mt-2] | server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace logging |
[component_container_mt-2] | model_repository_path[0]         | /tmp/models                                                                                                                                                                                          |
[component_container_mt-2] | model_control_mode               | MODE_EXPLICIT                                                                                                                                                                                        |
[component_container_mt-2] | strict_model_config              | 1                                                                                                                                                                                                    |
[component_container_mt-2] | rate_limit                       | OFF                                                                                                                                                                                                  |
[component_container_mt-2] | pinned_memory_pool_byte_size     | 268435456                                                                                                                                                                                            |
[component_container_mt-2] | cuda_memory_pool_byte_size{0}    | 67108864                                                                                                                                                                                             |
[component_container_mt-2] | response_cache_byte_size         | 0                                                                                                                                                                                                    |
[component_container_mt-2] | min_supported_compute_capability | 6.0                                                                                                                                                                                                  |
[component_container_mt-2] | strict_readiness                 | 1                                                                                                                                                                                                    |
[component_container_mt-2] | exit_timeout                     | 30                                                                                                                                                                                                   |
[component_container_mt-2] +----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[component_container_mt-2] 
[component_container_mt-2] [INFO] [1692262477.686485925] [triton_node]: [NitrosContext] Running application...
[component_container_mt-2] [INFO] [1692262477.686718962] [triton_node]: [NitrosNode] Starting a heartbeat timer (eid=185)
[component_container_mt-2] [INFO] [1692262477.686740019] [detectnet_decoder_node]: [NitrosContext] Loading application: '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/STSNUHJMYN.yaml'
[component_container_mt-2] [INFO] [1692262477.688503258] [detectnet_decoder_node]: [NitrosNode] Linking Nitros pub/sub to the loaded application
[component_container_mt-2] [INFO] [1692262477.688563701] [detectnet_decoder_node]: [NitrosNode] Calling user's post-load-graph callback
[component_container_mt-2] [INFO] [1692262477.688710677] [detectnet_decoder_node]: [NitrosContext] Initializing application...
[component_container_mt-2] [INFO] [1692262477.689267787] [detectnet_decoder_node]: [NitrosContext] Running application...
[component_container_mt-2] [INFO] [1692262477.689498145] [detectnet_decoder_node]: [NitrosNode] Starting a heartbeat timer (eid=260)
[component_container_mt-2] I0817 08:54:37.690823 9160 model_lifecycle.cc:459] loading: detectnet:1
[component_container_mt-2] I0817 08:54:37.729121 9160 tensorrt.cc:5442] TRITONBACKEND_Initialize: tensorrt
[component_container_mt-2] I0817 08:54:37.729145 9160 tensorrt.cc:5452] Triton TRITONBACKEND API version: 1.10
[component_container_mt-2] I0817 08:54:37.729285 9160 tensorrt.cc:5458] 'tensorrt' TRITONBACKEND API version: 1.10
[component_container_mt-2] I0817 08:54:37.729289 9160 tensorrt.cc:5486] backend configuration:
[component_container_mt-2] {"cmdline":{"auto-complete-config":"false","min-compute-capability":"6.000000","backend-directory":"/opt/tritonserver/backends","default-max-batch-size":"4"}}
[component_container_mt-2] I0817 08:54:37.730022 9160 tensorrt.cc:5591] TRITONBACKEND_ModelInitialize: detectnet (version 1)
[component_container_mt-2] I0817 08:54:37.732231 9160 tensorrt.cc:5640] TRITONBACKEND_ModelInstanceInitialize: detectnet (GPU device 0)
[component_container_mt-2] I0817 08:54:38.308684 9160 logging.cc:49] Loaded engine size: 22 MiB
[component_container_mt-2] I0817 08:54:38.416526 9160 logging.cc:49] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +21, now: CPU 0, GPU 21 (MiB)
[component_container_mt-2] I0817 08:54:38.453404 9160 logging.cc:49] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +579, now: CPU 0, GPU 600 (MiB)
[component_container_mt-2] W0817 08:54:38.453427 9160 logging.cc:46] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[component_container_mt-2] I0817 08:54:38.453939 9160 tensorrt.cc:1556] Created instance detectnet on GPU 0 with stream priority 0
[component_container_mt-2] I0817 08:54:38.463066 9160 model_lifecycle.cc:693] successfully loaded 'detectnet' version 1
[ros2-1] [INFO] [1692262481.604223997] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262487.057428300] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262492.511381770] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262497.964383433] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262503.417381287] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262508.870630264] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
[ros2-1] [INFO] [1692262514.323680746] [rosbag2_storage]: Opened database '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/share/isaac_ros_detectnet/detectnet_rosbag/detectnet_rosbag_0.db3' for READ_ONLY.
^C[WARNING] [launch]: user interrupted with ctrl-c (SIGINT)
[ros2-1] [INFO] [1692262515.731276551] [rclcpp]: signal_handler(signum=2)
[ERROR] [isaac_ros_detectnet_visualizer.py-3]: process has died [pid 9162, exit code -2, cmd '/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/lib/isaac_ros_detectnet/isaac_ros_detectnet_visualizer.py --ros-args -r __node:=detectnet_visualizer'].
[INFO] [ros2-1]: process has finished cleanly [pid 9158]
[INFO] [rqt_image_view-4]: process has finished cleanly [pid 9164]
[component_container_mt-2] [INFO] [1692262515.731285778] [rclcpp]: signal_handler(signum=2)
[rqt_image_view-4] [INFO] [1692262515.731326933] [rclcpp]: signal_handler(signum=2)
[isaac_ros_detectnet_visualizer.py-3] Traceback (most recent call last):
[isaac_ros_detectnet_visualizer.py-3]   File "/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/lib/isaac_ros_detectnet/isaac_ros_detectnet_visualizer.py", line 87, in <module>
[isaac_ros_detectnet_visualizer.py-3]     main()
[isaac_ros_detectnet_visualizer.py-3]   File "/workspaces/isaac_ros-dev/install/isaac_ros_detectnet/lib/isaac_ros_detectnet/isaac_ros_detectnet_visualizer.py", line 82, in main
[isaac_ros_detectnet_visualizer.py-3]     rclpy.spin(DetectNetVisualizer())
[isaac_ros_detectnet_visualizer.py-3]   File "/opt/ros/humble/install/lib/python3.8/site-packages/rclpy/__init__.py", line 222, in spin
[isaac_ros_detectnet_visualizer.py-3]     executor.spin_once()
[isaac_ros_detectnet_visualizer.py-3]   File "/opt/ros/humble/install/lib/python3.8/site-packages/rclpy/executors.py", line 705, in spin_once
[isaac_ros_detectnet_visualizer.py-3]     handler, entity, node = self.wait_for_ready_callbacks(timeout_sec=timeout_sec)
[isaac_ros_detectnet_visualizer.py-3]   File "/opt/ros/humble/install/lib/python3.8/site-packages/rclpy/executors.py", line 691, in wait_for_ready_callbacks
[isaac_ros_detectnet_visualizer.py-3]     return next(self._cb_iter)
[isaac_ros_detectnet_visualizer.py-3]   File "/opt/ros/humble/install/lib/python3.8/site-packages/rclpy/executors.py", line 588, in _wait_for_ready_callbacks
[isaac_ros_detectnet_visualizer.py-3]     wait_set.wait(timeout_nsec)
[isaac_ros_detectnet_visualizer.py-3] KeyboardInterrupt
[component_container_mt-2] [INFO] [1692262515.736169600] [detectnet_decoder_node]: [NitrosNode] Terminating the running application
[component_container_mt-2] [INFO] [1692262515.736193203] [detectnet_decoder_node]: [NitrosContext] Interrupting GXF...
[component_container_mt-2] [INFO] [1692262515.736542000] [detectnet_decoder_node]: [NitrosContext] Waiting on GXF...
[component_container_mt-2] [INFO] [1692262515.736553861] [detectnet_decoder_node]: [NitrosContext] Deinitializing...
[component_container_mt-2] [INFO] [1692262515.736829407] [detectnet_decoder_node]: [NitrosContext] Destroying context
[component_container_mt-2] [INFO] [1692262515.736929649] [detectnet_decoder_node]: [NitrosNode] Application termination done
[component_container_mt-2] [INFO] [1692262515.743373692] [triton_node]: [NitrosNode] Terminating the running application
[component_container_mt-2] [INFO] [1692262515.743396091] [triton_node]: [NitrosContext] Interrupting GXF...
[component_container_mt-2] [INFO] [1692262515.743411065] [triton_node]: [NitrosContext] Waiting on GXF...
[component_container_mt-2] [INFO] [1692262515.743635520] [triton_node]: [NitrosContext] Deinitializing...
[component_container_mt-2] [INFO] [1692262515.743892068] [triton_node]: [NitrosContext] Destroying context
[component_container_mt-2] I0817 08:55:15.744057 9160 server.cc:264] Waiting for in-flight requests to complete.
[component_container_mt-2] I0817 08:55:15.744075 9160 server.cc:280] Timeout 30: Found 0 model versions that have in-flight inferences
[component_container_mt-2] I0817 08:55:15.744078 9160 server.cc:295] All models are stopped, unloading models
[component_container_mt-2] I0817 08:55:15.744082 9160 server.cc:302] Timeout 30: Found 1 live models and 0 in-flight non-inference requests
[component_container_mt-2] I0817 08:55:15.744216 9160 tensorrt.cc:5678] TRITONBACKEND_ModelInstanceFinalize: delete instance state
[component_container_mt-2] I0817 08:55:15.748361 9160 tensorrt.cc:5617] TRITONBACKEND_ModelFinalize: delete model state
[component_container_mt-2] I0817 08:55:15.752245 9160 model_lifecycle.cc:578] successfully unloaded 'detectnet' version 1
[component_container_mt-2] INFO: infer_simple_runtime.cpp:70 TrtISBackend id:185 initialized model: detectnet
[component_container_mt-2] |==================================================================================================================================================================|
[component_container_mt-2] |                                           Job Statistics Report (regular)                                                                                        |
[component_container_mt-2] |==================================================================================================================================================================|
[component_container_mt-2] | Name                                               |   Count | Time (Median - 90% - Max) [ms] | Load (%) | Exec(ms) | Variation (Median - 90% - Max) [ns]        |
[component_container_mt-2] |------------------------------------------------------------------------------------------------------------------------------------------------------------------|
[component_container_mt-2] |                                    STSNUHJMYN_sink |     382 |     0.06 |     0.10 |     0.23 |    0.1 % |     26.5 |        60470 |       135564 |       266101 |
[component_container_mt-2] |                       STSNUHJMYN_detectnet_decoder |     382 |     4.25 |    77.87 |   134.31 |   17.9 % |   6530.1 |        63151 |       136616 |    526885666 |
[component_container_mt-2] |==================================================================================================================================================================|
[component_container_mt-2] |==================================================================================================================================================================|
[component_container_mt-2] |                                           Entity Statistics Report (regular)                                                                                     |
[component_container_mt-2] |==================================================================================================================================================================|
[component_container_mt-2] | Entity Name             | Entity State             |   Count | Time (Median - 90% - Max) [ms]                                                                    |
[component_container_mt-2] |------------------------------------------------------------------------------------------------------------------------------------------------------------------|
[component_container_mt-2] |         STSNUHJMYN_sink  |              StopPending |       1 |  0.00476 |  0.00476 |  0.00476                                                                   |
[component_container_mt-2] |         STSNUHJMYN_sink  |                     Idle |     382 | 100.78924 | 102.89830 | 354.49106                                                                   |
[component_container_mt-2] |         STSNUHJMYN_sink  |                  Ticking |     382 |  0.04030 |  0.07384 |  0.20249                                                                   |
[component_container_mt-2] |         STSNUHJMYN_sink  |                  Pending |     382 |  0.01528 |  0.02779 |  0.08916                                                                   |
[component_container_mt-2] |------------------------------------------------------------------------------------------------------------------------------------------------------------------|
[component_container_mt-2] | STSNUHJMYN_detectnet_de  |              StopPending |       1 |  0.00969 |  0.00969 |  0.00969                                                                   |
[component_container_mt-2] | STSNUHJMYN_detectnet_de  |                     Idle |     382 | 96.92199 | 97.91078 | 277.91642                                                                   |
[component_container_mt-2] | STSNUHJMYN_detectnet_de  |                  Ticking |     382 |  4.20522 | 77.85439 | 134.29283                                                                   |
[component_container_mt-2] | STSNUHJMYN_detectnet_de  |                  Pending |     382 |  0.02605 |  0.03119 |  0.04960                                                                   |
[component_container_mt-2] |------------------------------------------------------------------------------------------------------------------------------------------------------------------|
[component_container_mt-2] |==================================================================================================================================================================|
[rqt_image_view-4] QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-admin'
[component_container_mt-2] I0817 08:55:16.744175 9160 server.cc:302] Timeout 29: Found 0 live models and 0 in-flight non-inference requests
[component_container_mt-2] [INFO] [1692262516.745812950] [triton_node]: [NitrosNode] Application termination done
[component_container_mt-2] [INFO] [1692262516.756099298] [dnn_image_encoder]: [NitrosNode] Terminating the running application
[component_container_mt-2] [INFO] [1692262516.756116312] [dnn_image_encoder]: [NitrosContext] Interrupting GXF...
[component_container_mt-2] [INFO] [1692262516.756128110] [dnn_image_encoder]: [NitrosContext] Waiting on GXF...
[component_container_mt-2] [INFO] [1692262516.756386421] [dnn_image_encoder]: [NitrosContext] Deinitializing...
[component_container_mt-2] [INFO] [1692262516.762070374] [dnn_image_encoder]: [NitrosContext] Destroying context
[component_container_mt-2] [INFO] [1692262516.762683724] [dnn_image_encoder]: [NitrosNode] Application termination done
[INFO] [component_container_mt-2]: process has finished cleanly [pid 9160]