Closed — zacksiri closed this issue 3 weeks ago
Apologies, I retried using cuda120 and everything worked out of the box. Closing this.

The main issue was that I was previously using cuDNN 9.1, with which cuda120 wasn't working. I then downgraded to cuDNN 8.9.7 and tried compiling from source without first retrying cuda120. After compiling from source didn't work, I decided to try cuda120 again.
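For anyone landing here with a similar problem, here is a minimal sketch of the setup that the comment above describes (the `XLA_TARGET` / `XLA_BUILD` variable names are the real ones EXLA uses; the exact commands and the assumption that a clean rebuild is needed are mine, not confirmed in this thread):

```shell
# Use the precompiled CUDA 12 binary (what ultimately worked here,
# together with cuDNN 8.9.x rather than 9.x):
export XLA_TARGET=cuda120

# Only if you instead want to compile XLA from source:
# export XLA_TARGET=cuda
# export XLA_BUILD=true

# Hypothetical clean-rebuild step so the new target actually takes effect:
mix deps.clean xla exla --build
mix deps.get
mix compile
```

The key point from the resolution: the precompiled cuda120 target worked once cuDNN was downgraded to an 8.x release, so checking the cuDNN version before rebuilding may save a from-source compile.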
I've been running into this issue with XLA. I'm on Ubuntu 24.04, and all my other ML frameworks (PyTorch, tinygrad) work fine with my current setup. The only one I can't seem to get working is EXLA.
I've tried just setting `XLA_TARGET` to `cuda120` using the precompiled binary, and also compiling from source using `cuda` with `XLA_BUILD=true`. Here is the log output:
Code example I ran:
Here is the output of `nvcc --version` and `nvidia-smi`:
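Since the resolution turned out to hinge on the cuDNN version, a quick diagnostic sketch for checking both CUDA and cuDNN versions may be useful (the header paths are assumptions about a standard Linux install and vary by distribution and install method):

```shell
# CUDA toolkit version as seen by the compiler:
nvcc --version | grep release

# Driver-side CUDA version:
nvidia-smi | head -n 4

# cuDNN version is recorded in its header; check the common locations
# (these paths are assumptions, adjust for your install):
grep -m1 CUDNN_MAJOR /usr/include/x86_64-linux-gnu/cudnn_version.h 2>/dev/null \
  || grep -m1 CUDNN_MAJOR /usr/local/cuda/include/cudnn_version.h
```

If `CUDNN_MAJOR` reports 9 while the target expects an 8.x release, that mismatch matches the failure mode described in this thread.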