-
### What feature would you like to see?
For closer parity with the existing supported integer types, it would be helpful to have access to other common floating-point types such as 16 (h…
-
When training with half precision I noticed that normalization in NTXentLoss can give ```NaN``` values.
In the ```forward``` method, there is this code:
```
# normalize the output to length 1
…
```
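For context, here is a minimal sketch of the failure mode (an assumption about the cause, not code taken from NTXentLoss itself): the default eps in `torch.nn.functional.normalize` underflows to zero in float16, so any all-zero or fully underflowed embedding row divides by zero.

```
import torch
import torch.nn.functional as F

# A zero (or underflowed) row in half precision: the default
# eps=1e-12 rounds to 0 in float16, so clamp_min(eps) cannot
# protect the division and 0 / 0 yields NaN.
emb = torch.zeros(2, 4, dtype=torch.float16)
print(F.normalize(emb, dim=1))  # all NaN
```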
-
From the CMake output:
```
-- Autodetected CUDA architecture(s): 7.5 7.5
-- Building with CUDA flags: -gencode;arch=compute_75,code=sm_75
-- Your setup does not supports half precision (it requ…
```
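For what it's worth, here is a quick check (a sketch assuming PyTorch is installed; the build system here may apply its own stricter rule) of whether the detected GPU supports native half-precision math. NVIDIA lists FP16 arithmetic from compute capability 5.3 upward, so a 7.5 card qualifies:

```
import torch

# Report the detected compute capability and compare it against 5.3,
# the first capability with native FP16 arithmetic on NVIDIA GPUs.
major, minor = torch.cuda.get_device_capability(0)
print(f"compute capability: {major}.{minor}")
print("native FP16 math:", (major, minor) >= (5, 3))
```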
-
I’m encountering issues when trying to convert my YOLOv8x model from TorchScript to torch_neuron on Kaggle. Here are the details:
1. YOLOv8x Model (Single Class):
- Trained model file: '.pt'
…
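In case it helps frame the question, the usual conversion path with the AWS `torch-neuron` package looks roughly like this (a sketch; the file names and the 640×640 input size are assumptions, not taken from the report above):

```
import torch
import torch_neuron  # AWS Neuron SDK; registers the torch.neuron namespace

# Load the TorchScript export and compile it for Inferentia.
model = torch.jit.load("yolov8x.torchscript")  # hypothetical path
example = torch.zeros(1, 3, 640, 640)          # assumed input shape
model_neuron = torch.neuron.trace(model, example_inputs=[example])
model_neuron.save("yolov8x_neuron.pt")
```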
-
I am loading a model using `torch.load()` and run out of memory. I heard that I can quantize the model with bitsandbytes while loading it; it will then take less memory. Do you know how to do thi…
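One common route (an assumption: this only applies if the checkpoint is a Hugging Face transformers model rather than a raw `torch.load()` pickle) is to let transformers drive bitsandbytes, so the weights are quantized to 8-bit as they are loaded and the full-precision copy never materializes:

```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize to 8-bit on load via bitsandbytes; "facebook/opt-1.3b"
# is a placeholder model id, not the model from the question.
config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",
    quantization_config=config,
    device_map="auto",
)
```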
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits of both this extension and the webui
### What happened?
I am using Automatic…
-
While trying to use half-precision floating-point operations for the RISC-V "V" (vector) extension instruction set architecture with Spike, my assembly code gives me the error "An illegal instruction was execu…
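One thing worth ruling out (an assumption about the cause, since the report is truncated): vector half-precision instructions live in the Zvfh extension, which Spike only decodes when it is named in the ISA string, e.g.:

```
# Hypothetical invocation; adjust to your actual Spike/pk setup.
spike --isa=rv64gcv_zfh_zvfh pk ./a.out
```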
-
Hello! There is a known bug with some graphics cards like my 1660 Super (compute capability 7.5):
it generates black images with half precision.
Is there a way to launch with " --precision full --no-half "…
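If the question is where to put those flags, the usual spot (an assumption that the stock `webui-user.bat` launcher is in use; on Linux it would be `webui-user.sh` with `export` instead of `set`) is:

```
REM webui-user.bat — pass full-precision flags to the webui at launch
set COMMANDLINE_ARGS=--precision full --no-half
```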
-
`torch.nn.functional.normalize` uses an epsilon of 1e-12. This value is too small for half precision and gets evaluated as zero (`torch.HalfTensor([1e-12]) == 0`). This causes nans in half precision w…
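A quick demonstration, plus one possible workaround (the larger eps is a suggestion, not the library's fix):

```
import torch
import torch.nn.functional as F

# eps=1e-12 is below float16's smallest subnormal (~6e-8), so it
# rounds to zero and the clamp inside normalize() protects nothing:
print(torch.tensor([1e-12], dtype=torch.float16) == 0)  # tensor([True])

# An eps that float16 can represent keeps the clamp effective:
x = torch.zeros(1, 3, dtype=torch.float16)
print(F.normalize(x, dim=1, eps=1e-6))  # zeros instead of NaN
```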