-
![image](https://github.com/user-attachments/assets/54e4f668-1640-47d1-8c62-0d2f14e459f7)
NansException: A tensor with NaNs was produced in Unet. This could be either because there's not enough pre…
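The truncated message presumably goes on to point at a precision-related cause. As a rough illustration (not tied to this particular project), half precision overflows at fairly small magnitudes, which is one common way NaNs appear in an fp16 UNet:

```python
import torch

# Illustration of fp16 range limits: values beyond ~65504 overflow to inf,
# and inf - inf then turns into nan further down the computation.
x = torch.tensor([60000.0, 70000.0]).half()
print(x)        # tensor([60000., inf], dtype=torch.float16)
print(x - x)    # tensor([0., nan], dtype=torch.float16)
```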
-
As of now, usage of half precision is not straightforward, not only for extension libraries (as mentioned in issue #266) but also for the generation of standard kernels. E.g., using `half` (from the …
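For context on why kernel generation is the tricky part, here is a minimal sketch assuming a CuPy-style kernel generator (the excerpt does not name the library): the same elementwise source also has to compile and run when the operands are half-precision values.

```python
import cupy as cp

# Hedged sketch: a type-generic elementwise kernel that gets instantiated for
# float16 operands, i.e. the generated CUDA code must handle the half type.
add = cp.ElementwiseKernel('T x, T y', 'T z', 'z = x + y', 'add_any')

x = cp.arange(8, dtype=cp.float16)
y = cp.ones(8, dtype=cp.float16)
print(add(x, y))   # runs the generated kernel with half operands
```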
-
Both installation methods result in the same error, with the material displayed in purple.
Shader error in 'Shader Graphs/Master': Couldn't open include file 'Packages/com.unity.render-pipelines.un…
-
This issue is mainly for tracking the upstream counterpart (https://github.com/numpy/numpy/issues/14753) for compatibility purposes. However, there are unique challenges and opportunities in CUDA that…
-
The new correctly rounded divide test for half precision, located in binary_operator_half.cpp, uses an fptr for its reference function and computes the reference like this:
s[j] = HTF(p…
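A sketch of the reference idea, assuming HTF is a half-to-float conversion (the snippet above is truncated, so the exact expression is not shown): widen both half operands to a larger float type, divide there, and round once back to half.

```python
import numpy as np

# Hedged sketch of how such a divide reference is typically built:
# convert half -> float (HTF-style), divide in the wider type, then
# perform a single rounding back to half.
def ref_divide_half(a: np.float16, b: np.float16) -> np.float16:
    return np.float16(np.float32(a) / np.float32(b))

x, y = np.float16(1.0), np.float16(3.0)
print(ref_divide_half(x, y))   # reference value
print(np.float16(x / y))       # result under test would be compared against it
```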
-
Hi Jacob,
I tried to estimate the size of my model that uses half precision, but had no luck.
I changed line 34 to HalfTensor, but got the following message:
RuntimeError: "unfolded3d_copy_cpu…
-
Currently my electromagnetics simulator, built with GPU.js, doesn't work on iPad 6 and other devices that don't support single precision, because the unsigned precision mode does not seem accurate enough. However I looked into …
-
## 🚀 Feature
Allow `torch.cdist` to work with half precision tensors
## Motivation
The new `torch.cdist` function doesn't allow half precision tensors as inputs. However, using half precision ca…
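Until the op itself accepts fp16, one common stopgap is to build the distance matrix out of ops that already do. A hedged sketch (the helper name is made up, not part of torch):

```python
import torch

# Express pairwise Euclidean distances through matmul and reductions,
# which accept fp16 on the GPU: |a - b|^2 = |a|^2 + |b|^2 - 2 a.b
def cdist_fp16(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    a2 = (a * a).sum(-1, keepdim=True)                    # |a_i|^2
    b2 = (b * b).sum(-1, keepdim=True).transpose(-2, -1)  # |b_j|^2
    d2 = a2 + b2 - 2.0 * (a @ b.transpose(-2, -1))        # squared distances
    return d2.clamp_min_(0).sqrt()                        # guard tiny negatives

a = torch.randn(128, 64, device='cuda').half()
b = torch.randn(256, 64, device='cuda').half()
print(cdist_fp16(a, b).shape)   # torch.Size([128, 256])
```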
-
CUDA supports efficient computation with half-precision (16-bit) floats. This is probably enough precision for the pixel data in our problem. We might do this with CUDA's half2 type, but this would …
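A sketch of the half2 idea (CuPy is used here only as a convenient launcher, and the kernel is illustrative rather than the project's code): pack two fp16 pixels into each `__half2` so one instruction processes a pair.

```python
import cupy as cp

# Requires a GPU with fp16 arithmetic (sm_53 or newer).
scale_pixels = cp.RawKernel(r'''
#include <cuda_fp16.h>
extern "C" __global__
void scale_pixels(const __half2* src, __half2* dst, float s, int n_pairs) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_pairs) {
        dst[i] = __hmul2(src[i], __float2half2_rn(s));  // two fp16 multiplies at once
    }
}
''', 'scale_pixels')

pixels = cp.linspace(0, 1, 1024, dtype=cp.float16)   # even count -> 512 half2 pairs
out = cp.empty_like(pixels)
n_pairs = pixels.size // 2
scale_pixels(((n_pairs + 255) // 256,), (256,),
             (pixels, out, cp.float32(0.5), cp.int32(n_pairs)))
print(out[:4])
```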
-
Is it possible to use EfficientNet-PyTorch with half / mixed precision somehow?
I know mixed precision has been supported by PyTorch for about a year, but what about pretrained weights?
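One way this is commonly handled (a sketch, not the package's documented recipe, and assuming a PyTorch recent enough to have native AMP): keep the fp32 pretrained checkpoint as-is and run inference under autocast, so no separate half-precision weights are needed.

```python
import torch
from efficientnet_pytorch import EfficientNet  # lukemelas/EfficientNet-PyTorch

# Load the ordinary fp32 pretrained weights; autocast casts eligible ops to
# fp16 on the fly during the forward pass.
model = EfficientNet.from_pretrained('efficientnet-b0').cuda().eval()

x = torch.randn(1, 3, 224, 224, device='cuda')
with torch.no_grad(), torch.cuda.amp.autocast():
    logits = model(x)
print(logits.dtype)   # torch.float16 under autocast
```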