NVIDIA-AI-IOT / nvidia-tao


error running model inference using tao-toolkit 5.0 #14

Open statscol opened 11 months ago


Hello,

I've been trying to run inference using tao-toolkit 4.0 (nvcr.io/nvidia/tao/tao-toolkit:4.0.0-deploy) for both FaceDetect and FaceDetectIR, but whenever I try to generate the TRT engine file I get the following error:

detectnet_v2 gen_trt_engine \
                    -m $ETLT_FILE_PATH \
                    -e LOCAL_PATH_TO/nvidia-tao/tao_deploy/specs/FaceDetect/FaceDetect_trt.txt \
                    -k $MODEL_KEY \
                    --data_type $DATA_TYPE \
                    --batch_size 1 \
                    --max_batch_size 1 \
                    --engine_file $OUT_FILEPATH
##Error
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 1721, in ParseFloat
    return float(text)
ValueError: could not convert string to float: '8.0e'
google.protobuf.text_format.ParseError: 30:22 : '    translate_max_y: 8.0e': Couldn't parse float: 8.0e

The file I'm using as the experiment_spec is the same FaceDetect_trt.txt in this repo.
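The traceback bottoms out in Python's `float()`: protobuf's text-format parser reads the literal `8.0e` from the spec and hands it to `float()`, which rejects it because the exponent marker has no digits after it. A minimal reproduction outside TAO:

```python
# protobuf's text_format parser delegates float literals to Python's
# float(); "8.0e" has a dangling exponent marker and is rejected,
# while "8.0" and "8.0e0" parse fine.
for literal in ("8.0", "8.0e0", "8.0e"):
    try:
        print(literal, "->", float(literal))
    except ValueError as err:
        print(literal, "->", err)
```

So the spec file itself appears to carry a truncated float literal on the `translate_max_y` line, independent of which toolkit version parses it.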

Does anyone know of a workaround? I also tried tao-toolkit 5.0 and it didn't work.