PaddlePaddle / Anakin

High-performance, cross-platform inference engine; you can run Anakin on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices.
https://anakin.baidu.com/
Apache License 2.0

Can't convert from Caffe model. #521

Closed: szad670401 closed this issue 5 years ago

szad670401 commented 5 years ago

My config:

> OPTIONS:
>     Framework: CAFFE
>     SavePath: ./output
>     ResultName: face_r100
>     Config:
>         LaunchBoard: ON
>         Server:
>             ip: 0.0.0.0
>             port: 8888
>         OptimizedGraph:
>             enable: OFF
>             path: ./googlenet.paddle_inference_model.bin.saved
>     LOGGER:
>         LogToPath: ./log/
>         WithColor: ON
> 
> TARGET:
>     CAFFE:
>         # paths of the Caffe model to convert
>         Debug: NULL                            # Generally no need to modify.
>         PrototxtPath: ./model/model.prototxt        # Path to the Caffe prototxt.
>         ModelPath: ./model/model.caffmodel        # Path to the Caffe weights file.
>         NetType:        

```
Traceback (most recent call last):
  File "converter.py", line 79, in <module>
    graph = Graph(config)
  File "/root/Anakin/tools/external_converter_v2/parser/graph.py", line 26, in __init__
    raise NameError('ERROR: GrapProtoIO not support %s model.' % (config.framework))
NameError: ERROR: GrapProtoIO not support CAFFE model.
```
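
The failure happens in the converter's framework dispatch, before any model file is read. Below is a minimal, self-contained sketch, not Anakin's actual source, of how such a dispatch can end up rejecting CAFFE when the Caffe proto has never been compiled; `Config`, `caffe_proto_compiled`, and `caffe_pb2` are illustrative assumptions:

```python
# Sketch only: illustrates the kind of framework dispatch in parser/graph.py
# that produces the error above. Assumption: the CAFFE branch is only usable
# once caffe.proto has been compiled into a Python module (caffe_pb2), so
# with no proto path configured the constructor falls through to NameError.

class Config(object):
    def __init__(self, framework):
        self.framework = framework


def caffe_proto_compiled():
    """True only if a generated caffe_pb2 module is importable (assumption)."""
    try:
        import caffe_pb2  # noqa: F401 -- would be generated by protoc from caffe.proto
        return True
    except ImportError:
        return False


class Graph(object):
    def __init__(self, config):
        supported = ['FLUID']
        if caffe_proto_compiled():
            supported.append('CAFFE')
        if config.framework not in supported:
            raise NameError('ERROR: GrapProtoIO not support %s model.'
                            % (config.framework))
        # ... real parsing would continue here ...


if __name__ == '__main__':
    # Reproduces the reported NameError when caffe_pb2 has not been generated.
    Graph(Config('CAFFE'))
```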

Jayoprell commented 5 years ago

Did you set the path of the Caffe proto? You must set the Caffe proto path in the converter config before converting.
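
For reference, the converter's config.yaml needs to point at the Caffe proto definition in addition to the prototxt and caffemodel; without it the CAFFE parser cannot be built and the dispatch rejects the model. A hedged sketch of that section is below; the key name `ProtoPaths` and the proto path are assumptions here, so verify them against the config.yaml template shipped with tools/external_converter_v2:

```yaml
TARGET:
    CAFFE:
        # Assumed key name; check the converter's config.yaml template.
        ProtoPaths:
            - /path/to/caffe/src/caffe/proto/caffe.proto   # placeholder path
        PrototxtPath: ./model/model.prototxt
        ModelPath: ./model/model.caffmodel
        NetType:
```

With the proto path filled in, the generated caffe_pb2 module can be produced and the CAFFE branch should be selected instead of raising the NameError.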