My config:

```yaml
OPTIONS:
    Framework: CAFFE
    SavePath: ./output
    ResultName: face_r100
    Config:
        LaunchBoard: ON
        Server:
            ip: 0.0.0.0
            port: 8888
        OptimizedGraph:
            enable: OFF
            path: ./googlenet.paddle_inference_model.bin.saved
    LOGGER:
        LogToPath: ./log/
        WithColor: ON

TARGET:
    CAFFE:
        # path of fluid inference model
        Debug: NULL                            # Generally no need to modify.
        PrototxtPath: ./model/model.prototxt   # The upper path of a fluid inference model.
        ModelPath: ./model/model.caffmodel     # The upper path of a fluid inference model.
        NetType:
```
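For context, the converter selects its parser based on the `OPTIONS.Framework` value in this YAML, so the nesting and spelling of that field matter. A minimal sketch of reading it (this uses PyYAML and a shortened copy of the config above; it is an illustration, not Anakin's actual loader):

```python
import yaml  # PyYAML; illustration only, not Anakin's actual config loader

# Shortened copy of the OPTIONS section shown above.
CONFIG_TEXT = """
OPTIONS:
    Framework: CAFFE
    SavePath: ./output
    ResultName: face_r100
"""

def read_framework(text):
    """Parse the YAML and return OPTIONS.Framework, the value the
    converter dispatches on."""
    data = yaml.safe_load(text)
    return data["OPTIONS"]["Framework"]

print(read_framework(CONFIG_TEXT))  # CAFFE
```

If `Framework` were mis-indented (e.g. at top level instead of under `OPTIONS`), the lookup above would fail with a `KeyError` rather than returning `CAFFE`.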
```
Traceback (most recent call last):
  File "converter.py", line 79, in <module>
    graph = Graph(config)
  File "/root/Anakin/tools/external_converter_v2/parser/graph.py", line 26, in __init__
    raise NameError('ERROR: GrapProtoIO not support %s model.' % (config.framework))
NameError: ERROR: GrapProtoIO not support CAFFE model.
```
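The `NameError` is raised inside `Graph.__init__` when the framework string from the config is not among the parsers the class knows how to construct. A hypothetical sketch of that dispatch pattern (class and registry names here are illustrative, not Anakin's actual code):

```python
class GraphSketch:
    """Illustrative dispatch: map a framework name to a parser factory.

    Hypothetical registry -- Anakin's real Graph class wires up its
    parsers differently; this only reproduces the failure mode.
    """
    PARSERS = {
        "FLUID": lambda cfg: ("fluid-parser", cfg),
    }

    def __init__(self, config):
        framework = config["Framework"]
        if framework not in self.PARSERS:
            # Mirrors the error message seen in the traceback above.
            raise NameError(
                "ERROR: GrapProtoIO not support %s model." % framework
            )
        self.parser = self.PARSERS[framework](config)

# A CAFFE config reproduces the failure:
try:
    GraphSketch({"Framework": "CAFFE"})
except NameError as e:
    print(e)  # ERROR: GrapProtoIO not support CAFFE model.
```

In the real converter this branch is typically reached when the Caffe parser was never registered, e.g. because the Caffe protobuf bindings were not generated before running `converter.py`.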