domkirke opened this issue 2 weeks ago
https://github.com/openxla/xla/commit/cb6451b19c8618c857fc226c1b19bd7e86740a55 should have fixed this. Can you please try again?
@domkirke for visibility. Can you please check whether this is fixed?
Hello! Unfortunately this does not seem to be fixed; a new error appears after a `git pull`:
```
ERROR: /Users/domkirke/Code/jax-test/cpp/xla/xla/pjrt/c/BUILD:318:14: Linking xla/pjrt/c/pjrt_c_api_gpu_plugin.so failed: (Exit 1): cc_wrapper.sh failed: error executing command (from target //xla/pjrt/c:pjrt_c_api_gpu_plugin.so) external/local_config_cc/cc_wrapper.sh @bazel-out/darwin_arm64-opt/bin/xla/pjrt/c/pjrt_c_api_gpu_plugin.so-2.params
Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
ld: unknown options: --version-script --no-undefined
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Error in child process '/usr/bin/xcrun'. 1
INFO: Elapsed time: 4323.307s, Critical Path: 271.60s
INFO: 21665 processes: 8481 internal, 13183 darwin-sandbox, 1 local.
FAILED: Build did NOT complete successfully
```
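For context, the two rejected options are GNU ld flags that Apple's ld64 does not understand. A hedged sketch of the rough correspondence (flag names taken from the respective linker man pages, not from XLA's build files):

```shell
# GNU ld (Linux): limit exported symbols and fail on unresolved symbols
# clang -shared -Wl,--version-script,exports.lds -Wl,--no-undefined ...

# Apple ld64 (macOS): approximate equivalents
# clang -shared -Wl,-exported_symbols_list,exports.txt -Wl,-undefined,error ...
```

So a link line containing `--version-script` or `--no-undefined` should only ever be produced for the Linux toolchain.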
I tried compiling with the same `xla_configure.bazelrc`, and also reconfiguring from scratch; the result was the same.
This could be a problem in the XLA `configure.py` script.
@ddunl I can see that TensorFlow has these build options for macOS:

`build:macos_arm64 --cpu=darwin_arm64`

I don't see anything like that in the bazelrc file generated by the XLA `configure.py` script.
@domkirke Can you try adding this line to `xla_configure.bazelrc` manually?
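In case it helps, a minimal sketch of that manual edit (the `macos_arm64` config name follows TensorFlow's `.bazelrc` convention; whether XLA's build actually consumes that config is an assumption):

```shell
# Hypothetical: append the TensorFlow-style macOS option to the generated
# xla_configure.bazelrc (config name borrowed from TensorFlow's .bazelrc)
echo 'build:macos_arm64 --cpu=darwin_arm64' >> xla_configure.bazelrc

# Then build with the config enabled (commented out; requires Bazel):
# ./bazel-6.5.0-darwin-arm64 build --config=macos_arm64 //xla/...
```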
I tried, nothing new happened :/
I don't know much about Bazel, but isn't the line `build --build_tag_filters -no_oss,-gpu` suspicious, given that I have no GPU (MPS aside) on this computer?
I tried removing all `-gpu` flags in the `xla_configure.bazelrc` file, but got a different error:
```
ERROR: /Users/domkirke/Code/jax-test/cpp/xla/xla/stream_executor/rocm/BUILD:1106:11: Compiling xla/stream_executor/rocm/rocm_status.cc failed: (Exit 1): wrapped_clang_pp failed: error executing command (from target //xla/stream_executor/rocm:rocm_status) external/local_config_cc/wrapped_clang_pp '-D_FORTIFY_SOURCE=1' -fstack-protector -fcolor-diagnostics -Wall -Wthread-safety -Wself-assign -fno-omit-frame-pointer -g0 -O2 -DNDEBUG ... (remaining 59 arguments skipped)
Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
In file included from xla/stream_executor/rocm/rocm_status.cc:16:
./xla/stream_executor/rocm/rocm_status.h:24:10: fatal error: 'rocm/include/hip/hip_runtime.h' file not found
#include "rocm/include/hip/hip_runtime.h"
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```
`-gpu` means that you filter out all GPU-related targets, so those filters are there for a reason :)
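To illustrate: with `--build_tag_filters=-no_oss,-gpu`, wildcard patterns like `//xla/...` skip any target tagged `gpu`; drop the filter and GPU-only targets get built, which is why the missing `hip_runtime.h` error appeared once the `-gpu` flags were removed. A sketch (filter syntax as documented by Bazel; assumes you run it from the XLA checkout):

```shell
# Keep GPU-tagged targets out of a wildcard, CPU-only build
# bazel build --build_tag_filters=-no_oss,-gpu //xla/...

# Without the -gpu filter, //xla/... also expands to GPU targets such as
# //xla/stream_executor/rocm:rocm_status, which need ROCm headers installed.
```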
In any case, the reason I think something macOS-specific is missing from the generated bazelrc file is that those linker parameters should not be used on macOS:
https://github.com/openxla/xla/blob/0fc891390264fb85ac822f45c4106c48e1a10ffc/xla/pjrt/c/BUILD#L236
So for some reason, your build setup is not being detected as macOS. I am not very familiar with the infrastructure side; I tried to guess what could help, but someone more knowledgeable about the build infrastructure is needed.
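For illustration, this is roughly how Bazel builds usually guard platform-specific linker flags; a hedged sketch, not XLA's actual BUILD code (the condition label, target shape, and script name are placeholders):

```python
# Hypothetical BUILD fragment: the GNU ld options are confined to non-macOS
# platforms via select(), since Apple's ld64 rejects them.
cc_binary(
    name = "pjrt_c_api_gpu_plugin.so",
    linkshared = True,
    linkopts = select({
        "@platforms//os:macos": [],  # ld64 accepts neither flag
        "//conditions:default": [
            "-Wl,--version-script,pjrt.lds",  # placeholder script name
            "-Wl,--no-undefined",
        ],
    }),
)
```

If the macOS branch of such a `select()` is missing, or the macOS condition never matches, the Linux-only flags leak into the macOS link line, which is exactly what the `ld: unknown options` error suggests.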
Hello everyone!

I'm encountering a weird issue trying to compile OpenXLA on a Mac M1 for CPU (Sonoma 14.7). After configuring with `python configure.py --config=CPU`, I run the following command:

`./bazel-6.5.0-darwin-arm64 build --test_output=all --spawn_strategy=sandboxed //xla/...`

The compilation fails with the trace below. The `xla_configure.bazelrc` is:

I removed the `linker_env` option because the LLVM one failed, but otherwise I did not touch anything. I thought the error might come from a wrong C++ standard, but overriding `CXX_STANDARD` does not change anything. I also see the `-gpu` flag in the bazelrc file, but removing it does not help. Is this a proper issue or a misconfiguration? I'm running short of ideas on this one... Thank you very much!
Failing log: