Describe the issue
I'm trying to build onnxruntime with CUDA profiling enabled. I use the Dockerfile.cuda file from the rel-1.16.3 branch with the --enable_cuda_profiling argument added, but the build fails during the build call. I've also tried this several times locally, without the Dockerfile, and it fails there too; locally it fails on flash_fwd_split_hdim128_fp16_sm80.cu.
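For reference, a minimal sketch of the change, assuming the rel-1.16.3 Dockerfile.cuda passes these flags straight through to build.sh (the surrounding flags and paths here are illustrative, not my exact command):

```shell
# Hypothetical sketch of the build.sh invocation inside Dockerfile.cuda,
# with --enable_cuda_profiling added. CUDA paths and other flags are
# placeholders and may differ from the actual rel-1.16.3 file.
RUN ./build.sh --config Release \
    --build_shared_lib \
    --use_cuda \
    --cuda_home /usr/local/cuda \
    --cudnn_home /usr/local/cuda \
    --enable_cuda_profiling \
    --parallel
```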
Urgency
No response
Target platform
Linux 5.15.0-89-generic x86_64
Build script
Error / output
Visual Studio Version
No response
GCC / Compiler Version
No response