apache / tvm

Open deep learning compiler stack for cpu, gpu and specialized accelerators
https://tvm.apache.org/
Apache License 2.0

[Bug] Building errors for hexagon_launcher #17193

Closed chayliu-ecarx closed 1 month ago

chayliu-ecarx commented 2 months ago

When building hexagon_launcher following the guide at https://github.com/apache/tvm/tree/main/apps/hexagon_launcher, there are some errors:

tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:661:76: error: no member named 'invoke_result_t' in namespace 'std'
  template <typename F, typename = std::enable_if_t<std::is_same_v<T, std::invoke_result_t<F, T>>>>
                                                                      ~~~~~^
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:661:92: error: 'F' does not refer to a value
  template <typename F, typename = std::enable_if_t<std::is_same_v<T, std::invoke_result_t<F, T>>>>
                                                                                           ^
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:661:22: note: declared here
  template <typename F, typename = std::enable_if_t<std::is_same_v<T, std::invoke_result_t<F, T>>>>
                     ^
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:661:99: error: expected member name or ';' after declaration specifiers
  template <typename F, typename = std::enable_if_t<std::is_same_v<T, std::invoke_result_t<F, T>>>>
                                                                                                  ^
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:784:43: error: no template named 'invoke_result_t' in namespace 'std'
  template <typename F, typename U = std::invoke_result_t<F, T>>
                                     ~~~~~^
tvm/apps/hexagon_launcher/cmake/hexagon/../../../../include/tvm/runtime/container/array.h:792:47: error: no template named 'is_same_v' in namespace 'std'; did you mean 'is_same'?

The Hexagon SDK version is 4.5.

@quic-sanirudh @abhikran-quic @kparzysz-quic @sdalvi-quic

chayliu-ecarx commented 2 months ago

Solved by adding the CMake option -DCMAKE_CXX_STANDARD=17.
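As a sketch, the flag can simply be appended to the configure command from the hexagon_launcher guide (the other options are elided here; use them exactly as documented there):

```shell
# Sketch only: keep the configure options from the hexagon_launcher
# guide and add the C++ standard flag so the C++17 traits resolve.
cmake -DCMAKE_CXX_STANDARD=17 <other options from the guide> ..
```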

quic-sanirudh commented 2 months ago

> Solved by adding the CMake option -DCMAKE_CXX_STANDARD=17.

Thanks for the issue. If you're interested, please feel free to send a PR to update the docs so that it's helpful to others.

chayliu-ecarx commented 2 months ago

OK, I will add this to the README.

However, I have some other questions:

  1. Can we import models from other frameworks, such as ONNX?
  2. Can we import a float32 model and use AIMET quantization encoding information?
  3. Can we use QNN as the runtime via BYOC?
  4. After importing the InceptionV4 TFLite model and trying to build it as:

         mod, params = relay.frontend.from_tflite(tflite_model)
         target = tvm.target.hexagon('v66', hvx=0)
         with tvm.transform.PassContext(opt_level=3):
             lib = relay.build(mod, tvm.target.Target(target, host=target), params=params, mod_name="default")

     there is an error: LLVM ERROR: Do not know how to split the result of this operator!

quic-sanirudh commented 2 months ago

> OK, I will add this to the README.
>
> However, I have some other questions:
>
> 1. Can we import models from other frameworks, such as ONNX?
> 2. Can we import a float32 model and use AIMET quantization encoding information?
> 3. Can we use QNN as the runtime via BYOC?
> 4. After importing the InceptionV4 TFLite model and trying to build it as:
>
>        mod, params = relay.frontend.from_tflite(tflite_model)
>        target = tvm.target.hexagon('v66', hvx=0)
>        with tvm.transform.PassContext(opt_level=3):
>            lib = relay.build(mod, tvm.target.Target(target, host=target), params=params, mod_name="default")
>
>    there is an error: LLVM ERROR: Do not know how to split the result of this operator!

  1. Importing ONNX models through the ONNX importer in Relay is supported. There are some examples in the Hexagon contrib tests you can refer to.
  2. No, AIMET quantization is not supported in TVM.
  3. No, we don't support QNN through BYOC.
  4. That sounds like an error in LLVM lowering, which needs to be fixed in LLVM. Please post a separate issue with steps to reproduce and we can try to fix it.

Let me know if it's okay to close this issue as you figured out the fix.

chayliu-ecarx commented 2 months ago

OK, I will add this to the README file.