j-stephan opened this issue 2 years ago
Okay, after diving into the log files produced by setting `SYCL_VXX_KEEP_CLUTTER`, I was able to figure out that Vivado's own `g++` executable is unable to find the correct path to its corresponding `cc1plus`. It simply wants to call `cc1plus`, which isn't in my `PATH`. In my case it is located in `/path/to/Vivado/2020.2/tps/lnx64/gcc-6.2.0/libexec/gcc/x86_64-pc-linux-gnu/6.2.0/`. Adding this to my `PATH` lets the `compile.sh` script proceed. It still feels a bit odd that `g++` is unable to deduce this by itself. Is this the right way to work around the problem?
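A minimal sketch of that work-around, assuming the Vivado 2020.2 install location quoted above (the leading `/path/to` is a placeholder; adjust it for your system):

```shell
# Make Vivado's bundled g++ able to find its own cc1plus by putting the
# matching libexec directory on PATH (location quoted from the report above).
export PATH="/path/to/Vivado/2020.2/tps/lnx64/gcc-6.2.0/libexec/gcc/x86_64-pc-linux-gnu/6.2.0:$PATH"
```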
We usually (try to) run the tests only with the most recent available Vitis version (2021.2 as of now). Can you try with it? (That requires only 300 GB of disk space 😅)
If not, I can try to reproduce the bug and investigate it, but that will probably not be possible for another 3-4 weeks.
Okay, it appears that the Vitis tools are unhappy if I have another GCC compiler toolchain in `PATH` and `LD_LIBRARY_PATH`. So my workflow currently looks like this: set `SYCL_VXX_KEEP_CLUTTER`, let the build fail at the `.sycl-link-vxx` step, then run `<tempdir>/07-vxxlink.cmd` by hand.
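Sketched as shell commands (assuming `SYCL_VXX_KEEP_CLUTTER` only needs to be set, and with `<tempdir>` standing in for the temporary directory the failing link step reports):

```shell
# Keep the intermediate command files around instead of cleaning them up.
export SYCL_VXX_KEEP_CLUTTER=1

# ... re-run the build and let it fail at the .sycl-link-vxx step ...

# Then replay the recorded link command by hand from a clean environment:
sh "<tempdir>/07-vxxlink.cmd"
```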
Ok, we can try to reproduce these steps with our install to see whether this bug still exists in a recent Vitis version.
Thanks! And I'll see what can be done about a more recent software stack. It also turned out that the node itself is still running XRT 2019.1, which doesn't work well with more recent Vitis versions...
Try to use the most modern version of everything. :-)
We do not use the Vitis/Vivado setup because it breaks too many things; we just use the recipe from https://github.com/triSYCL/sycl/blob/sycl/unified/next/sycl/doc/GettingStartedXilinxFPGA.md#compiling-and-running-a-sycl-application. Are you using a similar setup?
So far I relied on the module system of the cluster to set up everything for me. I have now created a script that bypasses the module system for the Xilinx tools and looks very similar to the environment setup in the link. The result stays the same, though: `vpl` will fail in the `config_hw_emu.compile` step. When diving into the details, `/tmp/<build dir>/vxx_link_tmp/link/vivado/vpl/prj/prj.sim/sim_1/behav_waveform/xsim/compile.sh` fails with the following error message:

```
g++: error trying to exec 'cc1plus': execvp: No such file or directory
```
This error message goes away once I unload the gcc-11 module (which removes the entire GCC toolchain from `PATH` and `LD_LIBRARY_PATH`). Afterwards I can call the command from `/tmp/<build dir>/07-vxxlink.cmd` and it will happily continue (until the node's ancient `xclbinutil` complains about unknown parameters; I hope my IT department will resolve this soon).
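As shell commands, the module-based variant of the work-around would look roughly like this (module name as used on this cluster, `<build dir>` as reported by the failing build):

```shell
# Drop the external GCC 11 toolchain from PATH and LD_LIBRARY_PATH so the
# Vitis-bundled g++ resolves its own cc1plus again.
module unload gcc-11

# Replay the recorded link command kept around by SYCL_VXX_KEEP_CLUTTER.
sh "/tmp/<build dir>/07-vxxlink.cmd"
```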
IT installed Vitis 2021.2 on the cluster. The issue persists, though: if I load the gcc-11 module, the Vitis tools fail to find their own `cc1plus`. My current work-around looks like this:

```shell
$ LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/trinity/shared/pkg/compiler/gcc/11.2.0/lib64 $SYCL_BIN_DIR/clang++ --gcc-toolchain=/trinity/shared/pkg/compiler/gcc/11.2.0 -std=c++20 -fsycl -fsycl-targets=fpga64_hls_hw_emu single_task_vector_add.cpp
```

This lets the HLS flow continue until `xclbinutil` is called.
I'm on the current `sycl/unified/next` branch and using Vitis 2020.2 with a `xilinx_u200_xdma_201830_2` target. Compiling the `single_task_vector_add` test case works fine, but it fails once `vpl` is called. Any ideas on how to proceed from here? Can this error be debugged further by passing additional flags or something?
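One way to debug this class of failure further: `g++` can report which `cc1plus` binary it would execute via the standard `-print-prog-name` flag, which should also work with Vivado's bundled `g++`:

```shell
# Prints the full path of the cc1plus that g++ would exec. If it prints
# just "cc1plus" with no directory, g++ could not locate it and will fall
# back to searching $PATH -- exactly the failure mode described above.
g++ -print-prog-name=cc1plus
```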
Log file starts here