-
-
Hi all,
I was recommended yalla by a work colleague and really like the look of it!
However, while trying to get yalla code to run, I realised that I cannot run any of the code, because my wor…
-
CUDA is an NVIDIA-only acceleration solution; please provide a means to support acceleration on AMD GPUs as well.
For instance, consider migrating the CUDA code using https://github.com/ROCm-Developer-Tools/HIP. It …
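To illustrate the kind of migration the HIP tooling performs, here is a toy sketch of the source-to-source renaming that hipify-perl applies to CUDA code. The mapping table is a small illustrative subset of the real hipify tables (the names themselves are from the public CUDA and HIP runtime APIs); the real tool is more careful about overlapping names and parsing:

```python
# Toy sketch of the textual CUDA->HIP renaming that hipify-perl performs.
# CUDA_TO_HIP is an illustrative subset, not the full hipify mapping.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    """Apply the CUDA->HIP renames to a source string."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

cuda_snippet = "#include <cuda_runtime.h>\nfloat *d; cudaMalloc(&d, 256); cudaFree(d);"
print(hipify(cuda_snippet))
```

The output compiles against the HIP runtime instead of the CUDA runtime, which is the basic idea behind porting a CUDA codebase to AMD GPUs via ROCm.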
-
Hi,
This way it is easier to track compatibility/maturity status against newly added CUDA functions.
The hipify docs do this.
It's as easy as copying from the hipify docs the "a" and "d" columns, which indicate cu…
-
https://rocmdocs.amd.com/en/latest/Installation_Guide/List-of-ROCm-Packages-for-Ubuntu-Fedora.html
https://github.com/RadeonOpenCompute/ROCm
Trying to build Julia support for AMD GPUs on ROCm :)
…
-
TL;DR: Call torch_directml.get_gpu_memory() with the tile size set to the GPU memory in MB, and the first element of the returned list is the fraction of memory in use, in the range [0, 1.0).
I've been messing around w…
-
Since half of the developers may have an AMD card or an old NVIDIA card, and if that happens, they cannot use the GPU with CUDA.
**System information**
- TensorFlow version: ALL
- Are you willing to con…
-
### Problem Description
I am trying to HIPIFY the CUDA code using CUDAExtension. Here is the CUDA source https://github.com/nerfstudio-project/nerfacc/tree/master/nerfacc/cuda. Following is my step…
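For reference, a minimal sketch of the kind of setup.py this build path uses. On a ROCm build of PyTorch, torch.utils.cpp_extension hipifies the CUDA sources automatically at build time; the package name and source paths below are placeholders, not the nerfacc layout:

```python
# Build-config sketch (placeholder names/paths) for a PyTorch CUDA extension.
# On a ROCm build of PyTorch, cpp_extension runs hipify over the .cu sources
# automatically before compiling them with the HIP toolchain.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="my_cuda_ext",                      # placeholder package name
    ext_modules=[
        CUDAExtension(
            name="my_cuda_ext._C",           # placeholder module name
            sources=["csrc/ext.cpp", "csrc/kernels.cu"],  # placeholder paths
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```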
-
The following error message occurs when I install apex from source on my ROCm server (CentOS 7.6).
```
python setup.py install --cpp_ext --cuda_ext
```
```
In file included from /public/home…
-
Hi,
I am investigating extending DistributedDataParallel to accelerator devices other than CUDA devices.
Not only to support single-process-single-device, but also to support the single-process…
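The core operation DistributedDataParallel performs after the backward pass is an all-reduce that averages gradients across replicas; that step is what a new accelerator backend has to provide. A device-agnostic toy sketch (plain Python lists stand in for per-device gradient tensors; real DDP delegates this to a ProcessGroup backend such as NCCL or Gloo):

```python
# Toy, device-agnostic sketch of the gradient all-reduce DDP performs.
# replica_grads is a list of per-replica gradient lists; real DDP uses a
# ProcessGroup backend (NCCL, Gloo, or a custom one for other accelerators).
def allreduce_mean(replica_grads):
    """Average per-parameter gradients across replicas, broadcast result back."""
    n_replicas = len(replica_grads)
    n_params = len(replica_grads[0])
    averaged = [
        sum(rep[p] for rep in replica_grads) / n_replicas
        for p in range(n_params)
    ]
    # Every replica ends up with the same averaged gradients.
    return [list(averaged) for _ in range(n_replicas)]

grads = [[1.0, 2.0], [3.0, 4.0]]   # two replicas, two parameters each
print(allreduce_mean(grads))        # [[2.0, 3.0], [2.0, 3.0]]
```

Supporting single-process-multi-device means running this reduction across the devices owned by one process as well as across processes, which is why the backend abstraction matters.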