liuyang6055 opened 7 months ago
If you're using Ubuntu, just download the ROCm version of PyTorch.
Why the ROCm version? Is it faster than ZLUDA? There have been numerous compatibility issues with pytorch-rocm, so if we could use the standard CUDA build, we might be able to avoid those issues.
We only have AMD hardware, but we would like to use CUDA and NCCL, so we want to know whether ZLUDA can achieve this. We don't want to use ROCm.
PyTorch-rocm is the same as PyTorch-cuda on the surface. The ZLUDA integration of PyTorch doesn't work with everything, because cuDNN doesn't work with PyTorch.
ZLUDA uses ROCm, so it's not like you can avoid ROCm entirely with ZLUDA. Back to the original question: PyTorch uses ZLUDA just like every other application does. ZLUDA poses as a (special) CUDA implementation, which makes no difference to the app.
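Since ZLUDA poses as a CUDA implementation, the launch command is the only place anything changes. A minimal Linux sketch, assuming a `run_with_zluda` helper defined here for illustration and a hypothetical ZLUDA build directory (`/path/to/zluda` is a placeholder, not a real path):

```shell
# ZLUDA ships a libcuda shim; putting its directory first on LD_LIBRARY_PATH
# makes the dynamic loader resolve CUDA calls to ZLUDA instead of the NVIDIA driver.
ZLUDA_DIR=/path/to/zluda   # assumption: wherever you built/unpacked ZLUDA

run_with_zluda() {
  # Run any unmodified CUDA application with ZLUDA's libraries preferred.
  LD_LIBRARY_PATH="$ZLUDA_DIR:$LD_LIBRARY_PATH" "$@"
}

# e.g. run_with_zluda python train.py
```

The application itself (PyTorch included) needs no source changes; only the library search path differs.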
Would ZLUDA allow the use of something like xformers to speed up some AI programs?
hello,

```shell
export TORCH_CUDA_ARCH_LIST="6.1+PTX"
export CUDAARCHS=61
export CMAKE_CUDA_ARCHITECTURES=61
export USE_SYSTEM_NCCL=1
export NCCL_ROOT_DIR=/usr
export NCCL_INCLUDE_DIR=/usr/include
export NCCL_LIB_DIR=/usr/lib/x86_64-linux-gnu
export USE_EXPERIMENTAL_CUDNN_V8_API=OFF
```
The model I am using is fairseq.
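A quick sanity check for a setup like the one above (a sketch; `check_vars` is a helper introduced here, not part of PyTorch or fairseq) that the exported variables are actually visible to the build shell:

```shell
# Report any of the named build variables that are not exported in the
# current environment; the PyTorch build silently ignores unset ones.
check_vars() {
  missing=0
  for v in "$@"; do
    printenv "$v" >/dev/null || { echo "missing: $v"; missing=1; }
  done
  return "$missing"
}

check_vars TORCH_CUDA_ARCH_LIST CUDAARCHS CMAKE_CUDA_ARCHITECTURES USE_SYSTEM_NCCL \
  || echo "export the variables above before building"
```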