TF2ModelImpl is a wrapper on top of TensorFlow 2.x that implements the DLRModel API. It loads a TensorFlow 2.x SavedModel and runs inference through the TensorFlow runtime. I have also verified it with ssd_mobilenet_v2_320x320_coco17_tpu-8.
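A minimal sketch of what using this path looks like from Python. The SavedModel path and the input tensor name (`input_tensor`, the TF2 object-detection-API convention) are assumptions for illustration; DLR detects the TF2.x SavedModel format and dispatches to the TensorFlow runtime rather than the TVM runtime. The `dlr` import is guarded so the sketch still runs where DLR is not installed.

```python
import numpy as np

# Dummy uint8 image batch in NHWC layout, as SSD MobileNet expects.
image = np.random.randint(0, 255, size=(1, 320, 320, 3), dtype=np.uint8)

try:
    import dlr
    # DLR inspects the model directory, finds a TF2.x SavedModel, and
    # routes inference through TF2ModelImpl (TensorFlow runtime).
    model = dlr.DLRModel("ssd_mobilenet_v2/saved_model", dev_type="cpu")  # hypothetical path
    outputs = model.run({"input_tensor": image})  # assumed input name
except ImportError:
    outputs = None  # dlr not installed in this environment

print(image.shape)
```

The same `DLRModel` entry point is used as for TVM-compiled models; only the backend selected at load time differs.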
root@ce5f72a26aac:/neo-ai-dlr# python test_tf2.py
CALL HOME FEATURE ENABLED
You acknowledge and agree that DLR collects the following metrics to help improve its performance.
By default, Amazon will collect and store the following information from your device:
record_type: <enum, internal record status, such as model_loaded, model_>,
arch: <string, platform architecture, eg 64bit>,
osname: <string, platform os name, eg. Linux>,
uuid: <string, one-way non-identifable hashed mac address, eg. 8fb35b79f7c7aa2f86afbcb231b1ba6e>,
dist: <string, distribution of os, eg. Ubuntu 16.04 xenial>,
machine: <string, retuns the machine type, eg. x86_64 or i386>,
model: <string, one-way non-identifable hashed model name, eg. 36f613e00f707dbe53a64b1d9625ae7d>
If you wish to opt-out of this data collection feature, please follow the steps below:
1. Disable it with through code:
from dlr.counter.phone_home import PhoneHome
PhoneHome.disable_feature()
2. Or, create a config file, ccm_config.json inside your DLR target directory path, i.e. python3.6/site-packages/dlr/counter/ccm_config.json. Then added below format content in it, {"enable_phone_home" : false}
3. Restart DLR application.
4. Validate this feature is disabled by verifying this notification is no longer displayed, or programmatically with following command:
from dlr.counter.phone_home import PhoneHome
PhoneHome.is_enabled() # false as disabled
2022-03-09 05:45:35.948019: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-09 05:45:35.955873: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-09 05:45:35.956537: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-09 05:45:35.957721: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-03-09 05:45:35.958554: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-09 05:45:35.959198: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-09 05:45:35.959813: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-09 05:45:36.586782: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-09 05:45:36.587450: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-09 05:45:36.588067: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-09 05:45:36.588649: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13813 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5
WARNING:tensorflow:Skipping full serialization of Keras layer <__main__.TestModel object at 0x7ff8da343ac8>, because it is not built.
2022-03-09 05:45:37,061 INFO found TF2.x saved model, dlr will use TensorFlow runtime.
2022-03-09 05:45:37.191709: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
All tests passed!