Hi,
I have an ONNX model that I am trying to serve with the sleepsonthefloor/graphpipe-onnx:cpu Docker image.
I am running it with this command:
docker run -it -v "$PWD/models:/models/" -p 9000:9000 sleepsonthefloor/graphpipe-onnx:cpu --value-inputs=../models/model_value_inputs.json --model=../models/model.onnx --listen=0.0.0.0:9000
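For reference, my model_value_inputs.json follows the format used in the graphpipe examples, mapping each input name to [max_batch_size, shape] — the input name and dimensions below are placeholders, not my real model's:

```json
{
  "input_0": [1, [1, 50, 128]]
}
```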
But I always get this error:
E1122 08:15:37.071728 1 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E1122 08:15:37.072140 1 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E1122 08:15:37.072152 1 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
terminate called after throwing an instance of 'caffe2::EnforceNotMet'
what(): [enforce fail at backend.cc:1230] . Don't know how to convert LSTM without enough extra preconverted string
*** Aborted at 1542874537 (unix time) try "date -d @1542874537" if you are using GNU date ***
PC: @ 0x7fbe43a0d428 gsignal
*** SIGABRT (@0x1) received by PID 1 (TID 0x7fbe459eab40) from PID 1; stack trace: ***
@ 0x7fbe4434b390 (unknown)
@ 0x7fbe43a0d428 gsignal
@ 0x7fbe43a0f02a abort
@ 0x7fbe4404784d __gnu_cxx::__verbose_terminate_handler()
@ 0x7fbe440456b6 (unknown)
@ 0x7fbe44045701 std::terminate()
@ 0x7fbe44045919 __cxa_throw
@ 0x7fbe44bd6ba2 _ZZN6caffe24onnx13Caffe2Backend12OnnxToCaffe2EPNS_6NetDefES3_RKN7onnx_c210ModelProtoERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEibRKSt6vectorINS0_9Caffe2OpsESaISH_EEENUlS7_S3_E_clES7_S3_.constprop.684
@ 0x7fbe44bd79eb caffe2::onnx::Caffe2Backend::OnnxToCaffe2()
@ 0x7fbe44bd81c8 caffe2::onnx::Caffe2Backend::Prepare()
@ 0x738b4a c2_engine_initialize_onnx
@ 0x733a8f _cgo_e12a854003a1_Cfunc_c2_engine_initialize_onnx
@ 0x45f340 runtime.asmcgocall
How could I resolve it?