The motivation for this change was a desire to try more accurate object detection models, especially those from the TensorFlow 2 Detection Model Zoo. The mAP of the previous SSD models was around 25-27, while the mAP of the newer models is 35 and above, which should help eliminate false positives.
The older MobileNet / Inception SSDs are still recommended, as they run inference at 24 FPS while consuming reasonably low resources. But users are not limited to them - Watsor can now run any TensorFlow model, from version 1 to 2. This is especially interesting for users with a good GPU onboard, as the more accurate models require much more computational resources.
TensorFlow in the Docker image is now fully configured to use the GPU. That means it can accelerate not only models in .uff format, but also TensorFlow models in .pb and saved-model formats. The new TF models do not come with a frozen graph; instead, there is a saved_model folder that needs to be copied in full to /usr/share/watsor/model in the container. Make sure to set up the permissions for the files and directories after extracting the archive: the lack of read/execute for others will prevent Watsor from seeing them.
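The permission fix can be sketched as follows; a scratch directory stands in for /usr/share/watsor/model here so the commands can be tried anywhere:

```shell
# Sketch of fixing permissions after extracting a model archive.
# MODEL_DIR stands in for /usr/share/watsor/model inside the container;
# a scratch directory is used so the commands run anywhere.
MODEL_DIR=$(mktemp -d)
mkdir -p "$MODEL_DIR/saved_model/variables"
touch "$MODEL_DIR/saved_model/saved_model.pb"

# Archives are often extracted with owner-only permissions; without
# read/execute for others, Watsor cannot see the files.
chmod -R o-rwx "$MODEL_DIR"

# The fix: grant read to everything and execute to directories (capital X
# adds execute only where it makes sense, i.e. directories).
chmod -R o+rX "$MODEL_DIR"
ls -lR "$MODEL_DIR"
```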
The Coral accelerator is not ready for TF2 yet, but it also got a new model - SSD MobileDet, which works at the same speed yet achieves better detection results, outperforming the prior MobileNet models by a margin. The new model is bundled in the Docker images.
A new Docker image for Jetson devices (Xavier, TX2, and Nano), based on L4T, can be run using the NVIDIA Container Toolkit. The platform-specific libraries and drivers are mounted into the container by the NVIDIA container runtime from the underlying Jetson device.
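Starting such a container might look roughly like this (the image tag, port, and config path are illustrative assumptions, not taken from the release notes; only the NVIDIA runtime flag is essential):

```shell
# Sketch: running the Jetson (L4T) image with the NVIDIA container runtime.
# The runtime mounts the platform libraries and drivers into the container.
docker run --runtime nvidia --rm \
    --name watsor \
    -p 8080:8080 \
    -v /etc/watsor:/etc/watsor:ro \
    smirnou/watsor.jetson:latest   # illustrative image tag
```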
The GPU code got rid of the plugin, as it is now included in TensorRT 7. This simplifies setup and maintenance. The UFF models previously associated with that plugin have been recompiled and need to be downloaded and replaced. Otherwise, the following error is thrown:
The code has been upgraded to use the most recent dependencies and libraries. The deprecated Edge TPU Python API has been replaced with the PyCoral API.
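As a rough sketch of what detection with the PyCoral API looks like (the model and image paths are illustrative, and an attached Coral device is required for this to actually run):

```python
# Minimal object-detection sketch using the PyCoral API, which replaces
# the deprecated Edge TPU Python API. Model and image paths are illustrative.
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# Load an Edge TPU-compiled model and allocate its tensors
interpreter = make_interpreter('ssdlite_mobiledet_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the frame to the model's expected input size and set it as input
image = Image.open('frame.jpg').resize(common.input_size(interpreter))
common.set_input(interpreter, image)

# Run inference and list detections above a confidence threshold
interpreter.invoke()
for obj in detect.get_objects(interpreter, score_threshold=0.4):
    print(obj.id, obj.score, obj.bbox)
```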