NVIDIA-AI-IOT / tf_trt_models

TensorFlow models accelerated with NVIDIA TensorRT
BSD 3-Clause "New" or "Revised" License

MobilenetV2-SSD fails! #47

Open linuxsky opened 5 years ago

linuxsky commented 5 years ago

The following op types were not converted to TensorRT: Fill, Merge, Switch, Range, ConcatV2, ZerosLike, Identity, NonMaxSuppressionV3, Minimum, StridedSlice, Shape, Split, Where, Exp, ExpandDims, Unpack, GatherV2, NoOp, TopKV2, Cast, Placeholder, Mul, Pack, Reshape, ResizeBilinear, Squeeze, Add, Greater, Const, Sub, Transpose, Slice. (For more information see https://docs.nvidia.com/deeplearning/dgx/integrate-tf-trt/index.html#support-ops)
2019-03-29 13:17:01.262911: I tensorflow/contrib/tensorrt/convert/convert_graph.cc:928] Number of TensorRT candidate segments: 1
[1] 27275 killed python3 camera_tf_trt.py --image --filename --model ssd_mobilenet_v2_coco

lbcastro commented 5 years ago

I'm having the exact same problem when running that script on the Jetson Nano.

atyshka commented 5 years ago

Are you running out of memory? That's the problem I've been having. Does anyone know if TensorRT is device agnostic? If so, we could run the TensorRT optimization on a desktop with more memory and transfer the model to the Nano.

isra60 commented 5 years ago

You should modify the default parameters of the optimize function.

AFAIK the optimization must be done on the device you are going to use for inference.
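For reference, here is a rough sketch of what tuning those parameters can look like with the TF 1.x contrib API that this repo uses. The `frozen_graph` and `output_names` values are placeholders for your own model, and the exact numbers (workspace size, segment size) are just examples to reduce memory pressure, not recommended settings:

```python
# Sketch: tightening TF-TRT conversion parameters so the optimization
# step uses less memory on the Nano (TensorFlow 1.x contrib API).
import tensorflow.contrib.tensorrt as trt

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,      # your frozen tf.GraphDef (placeholder)
    outputs=output_names,              # your output tensor names (placeholder)
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,  # ~32 MB, much smaller than the default
    precision_mode='FP16',
    minimum_segment_size=50,           # leave small segments in TensorFlow
)
```

A larger `minimum_segment_size` means fewer, bigger TensorRT engines get built, which can lower peak memory during conversion at some cost in how much of the graph is accelerated.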

jaybdub commented 5 years ago

Hi All,

I was able to optimize on the Jetson Nano with 4GB of swap enabled, using the released TensorFlow 1.13.

You can permanently add 4GB of swap with the following commands:

sudo fallocate -l 4G /var/swapfile
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile
sudo bash -c 'echo "/var/swapfile swap swap defaults 0 0" >> /etc/fstab'
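To confirm the swap is active afterwards, you can use the standard util-linux/procps tools (not specific to this repo):

```shell
# List active swap devices; /var/swapfile should appear with SIZE 4G
swapon --show
# The "Swap:" row should now show roughly 4G total
free -h
```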

Please let me know if you run into any issues.

Thanks! John