huawei-noah / bolt

Bolt is a deep learning library with high performance and heterogeneous flexibility.
https://huawei-noah.github.io/bolt/
MIT License

How to compile on the Mac M1 platform? #98

Closed. lixcli closed this issue 2 years ago.

lixcli commented 2 years ago

Hi. I installed all the third-party libraries under third_party following the GitBook tutorial and ran ./install.sh --target=macos-x86_64. The bundled protoc would not run; I worked around that by installing the M1 build of protoc with brew and updating the PROTOC_ROOT and Protobuf_ROOT paths in third_party/install.sh. But now jsoncpp fails to link, and I can't resolve it. The error output is shown below:

ld: warning: dylib (/opt/homebrew/opt/protobuf/lib/libprotobuf.dylib) was built for newer macOS version (12.0) than being linked (11.3)
Undefined symbols for architecture arm64:
  "Json::Value::Value(Json::ValueType)", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
...
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [model_tools/src/libmodel_tools.dylib] Error 1
make[1]: *** [model_tools/src/CMakeFiles/model_tools.dir/all] Error 2
make: *** [all] Error 2

Is there currently any way to compile bolt on the Mac M1?

yuxianzhi commented 2 years ago

If you don't need the model-conversion module, you can disable it, which removes all third-party dependencies: ./install.sh --target=macos-x86_64. jsoncpp is only needed by the direct TensorFlow converter (a TensorFlow model can be saved as ONNX and then converted to bolt with the ONNX converter), which is rarely used. If the build fails, you can edit bolt/install.sh and turn off USE_TENSORFLOW.
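For reference, a minimal sketch of the kind of edit being suggested, assuming bolt/install.sh contains a line of the form USE_TENSORFLOW=ON (the real file's format may differ); demonstrated here on a stand-in file:

```shell
# Hypothetical stand-in for bolt/install.sh; the real file's format may differ.
printf 'USE_TENSORFLOW=ON\n' > install_demo.sh

# Turn the TensorFlow converter off so jsoncpp is no longer required.
sed -i.bak 's/USE_TENSORFLOW=ON/USE_TENSORFLOW=OFF/' install_demo.sh

cat install_demo.sh
# → USE_TENSORFLOW=OFF
```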

lixcli commented 2 years ago

Thanks for the reply. I have already disabled the TensorFlow build. I want to use bolt's quantization to speed up inference, so I need to build model_quantization, but that file also depends on jsoncpp.


yuxianzhi commented 2 years ago
  1. Check that the compiled *.dylib files are installed under third_party/macos-x86_64/jsoncpp/lib
  2. Run cd build-macos-x86_64 && make VERBOSE=1 and check whether the jsoncpp library appears on the link line
lixcli commented 2 years ago
  1. Check that the compiled *.dylib files are installed under third_party/macos-x86_64/jsoncpp/lib
  2. Run cd build-macos-x86_64 && make VERBOSE=1 and check whether the jsoncpp library appears on the link line

I checked third_party/macos-x86_64/jsoncpp/lib, and the compiled libjsoncpp.1.9.4.dylib, libjsoncpp.24.dylib, and libjsoncpp.dylib are all there. When I ran cd build-macos-x86_64 && make VERBOSE=1, I noticed that the link command for libmodel_tools.dylib does not include any jsoncpp dylib. Could that be the problem?

With the TensorFlow build disabled, CMakeLists.txt no longer links jsoncpp, so I modified it to add jsoncpp back:

if (USE_CAFFE OR USE_ONNX OR USE_FLOW) # line 9 of CMakeLists.txt
    find_package(Protobuf)
    find_package(jsoncpp) # added this line
endif()

Is the way I added it wrong?
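One likely cause, offered as a sketch rather than a verified fix: find_package only locates the package; the library still has to be added to the consuming target's link line. Assuming jsoncpp's CMake config exports the jsoncpp_lib target (true for recent jsoncpp releases) and that the target built here is named model_tools, something like:

```cmake
if (USE_CAFFE OR USE_ONNX OR USE_FLOW)
    find_package(Protobuf)
    find_package(jsoncpp)   # locates the package, but links nothing by itself
endif()

# The imported target must also be added to the target's link libraries;
# "model_tools" and "jsoncpp_lib" are assumed names -- check the real
# target names in bolt's CMakeLists.txt and jsoncpp's installed config.
target_link_libraries(model_tools jsoncpp_lib)
```

This would explain why the VERBOSE=1 link command shows no jsoncpp dylib even though find_package succeeds.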

lixcli commented 2 years ago

This is the output of running cd build-macos-x86_64 && make VERBOSE=1:

[ 73%] Linking CXX shared library libmodel_tools.dylib
cd /xxx/xxx/bolt/build_macos-x86_64/model_tools/src && /opt/homebrew/Cellar/cmake/3.21.3/bin/cmake -E cmake_link_script CMakeFiles/model_tools.dir/link.txt --verbose=1
/Applications/Xcode.app/Contents/Developer/usr/bin/g++  -W -Wextra -O3 -fPIC -fstack-protector-all -Wno-unused-command-line-argument -Wno-unused-parameter -Wno-unused-result -Wno-deprecated-declarations -Wno-unused-variable -pthread -D_USE_GENERAL -D_USE_FP32 -D_USE_INT8 -D_USE_CAFFE -D_USE_ONNX -std=c++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.1.sdk -dynamiclib -Wl,-headerpad_max_install_names -o libmodel_tools.dylib -install_name @rpath/libmodel_tools.dylib CMakeFiles/model_tools.dir/model_data_type_converter.cpp.o CMakeFiles/model_tools.dir/model_quantization.cpp.o CMakeFiles/model_tools.dir/online_conversion.cpp.o  -Wl,-rpath,/xxx/xxx/bolt/build_macos-x86_64/model_tools/src/caffe -Wl,-rpath,/xxx/xxx/bolt/build_macos-x86_64/model_tools/src/onnx -Wl,-rpath,/xxx/xxx/bolt/build_macos-x86_64/common/model_spec/src -Wl,-rpath,/xxx/xxx/bolt/build_macos-x86_64/common/uni/src caffe/libmodel_tools_caffe.dylib onnx/libmodel_tools_onnx.dylib ../../common/model_spec/src/libmodel_spec.dylib ../../common/uni/src/libuni.dylib /opt/homebrew/opt/protobuf/lib/libprotobuf.dylib 

# Error output
Undefined symbols for architecture arm64:
  "Json::Value::Value(Json::ValueType)", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
  "Json::Value::~Value()", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
  "Json::Value::operator[](char const*)", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
  "Json::Value::operator[](std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
  "Json::Reader::parse(std::__1::basic_istream<char, std::__1::char_traits<char> >&, Json::Value&, bool)", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
  "Json::Reader::Reader()", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
  "Json::Value::getMemberNames() const", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
  "Json::Value::size() const", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
  "Json::Value::asFloat() const", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
  "Json::Value::isDouble() const", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
  "Json::Value::isObject() const", referenced from:
      add_scale_from_file(ModelSpec*, char const*) in model_quantization.cpp.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [model_tools/src/libmodel_tools.dylib] Error 1
make[1]: *** [model_tools/src/CMakeFiles/model_tools.dir/all] Error 2
make: *** [all] Error 2
yuxianzhi commented 2 years ago

Could you try again without disabling USE_TENSORFLOW and send me the VERBOSE=1 output?

lixcli commented 2 years ago

Could you try again without disabling USE_TENSORFLOW and send me the VERBOSE=1 output?

Thanks for the reply. I re-enabled the TensorFlow build, and now it compiles successfully. However, I ran into a small problem during the build; the following error appears:

In file included from /xxx/xxx/bolt/model_tools/src/onnx/onnx_wrapper.cpp:14:
/xxx/xxx/bolt/model_tools/src/onnx/onnx_adaptee.h:87:46: error: too many arguments to function call, expected single argument 'total_bytes_limit', have 2 arguments
        codedstr.SetTotalBytesLimit(INT_MAX, INT_MAX / 2);

I changed codedstr.SetTotalBytesLimit(INT_MAX, INT_MAX / 2); to codedstr.SetTotalBytesLimit(INT_MAX), and then it built successfully. Will modifying this affect normal operation?
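If the code needs to build against both older and newer protobuf, one common pattern is to guard the call on GOOGLE_PROTOBUF_VERSION; the exact cutoff below is an assumption (the two-argument overload of SetTotalBytesLimit was deprecated and later removed in newer protobuf releases), so verify it against your protobuf's coded_stream.h:

```cpp
// Sketch: version-guarded call. 3011000 (v3.11) as the removal point is
// an assumption -- check the overloads declared in your coded_stream.h.
#if GOOGLE_PROTOBUF_VERSION < 3011000
        codedstr.SetTotalBytesLimit(INT_MAX, INT_MAX / 2);
#else
        codedstr.SetTotalBytesLimit(INT_MAX);
#endif
```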

yuxianzhi commented 2 years ago

This comes from differences between protobuf versions; you should check the API of your protobuf version, but it should not cause any problems.

lixcli commented 2 years ago

This comes from differences between protobuf versions; you should check the API of your protobuf version, but it should not cause any problems.

Thanks, my problem is solved.