-
## 🐛 Bug
I have noticed that, in order to generate fused kernels for half precision, I am required to turn the profiling executor off:
```py
torch._C._jit_set_profiling_executor(False)
torch._C._j…
```
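For context, a minimal sketch of that workaround, assuming the usual pairing with `torch._C._jit_set_profiling_mode(False)` (both are private, version-dependent APIs), a CUDA device, and a scripted elementwise op that the legacy fuser can collapse into a single half-precision kernel:

```py
import torch

# Assumption: these private JIT switches exist in this build; they have changed
# across PyTorch releases, so treat this as a sketch rather than a stable API.
torch._C._jit_set_profiling_executor(False)
torch._C._jit_set_profiling_mode(False)

@torch.jit.script
def gated_mul(x, y):
    # simple elementwise chain the legacy fuser can combine into one kernel
    return (x * y).sigmoid() * x

x = torch.randn(1 << 14, device="cuda", dtype=torch.half)
y = torch.randn(1 << 14, device="cuda", dtype=torch.half)

for _ in range(3):  # warm-up runs so the fusion pass actually fires
    gated_mul(x, y)

# Inspect the optimized graph to check whether a FusionGroup was created.
print(torch.jit.last_executed_optimized_graph())
```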
-
### 🐛 Describe the bug
1. It seems blip2 testing doesn't work correctly at all if the model is in half precision (torch.float16).
2. With bfloat16, `colossalai.shardformer.layer.FusedLayerNorm` doesn't see…
-
Hi, I wonder how you managed to fit a batch size of 24 on your 48 GB GPU. Did you use a .half() conversion for the model and training data during training?
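For reference, a minimal sketch of what such a blanket .half() conversion looks like; the toy model and shapes here are hypothetical, the point is only that both the parameters and the input batch have to be cast to fp16:

```py
import torch
from torch import nn

# Hypothetical toy model; any real model would be cast the same way.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).cuda().half()

batch = torch.randn(24, 512, device="cuda").half()  # batch size 24, fp16 inputs
out = model(batch)
print(out.dtype)  # torch.float16

# For actual training, torch.cuda.amp (autocast + GradScaler) is usually safer
# than a pure .half() model, since fp16 gradients can underflow or overflow.
```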
-
I was just curious if there are any plans for half precision / fp16 support in the future. Thank you.
-
### 🚀 The feature, motivation and pitch
Noticed this odd gap in coverage when looking at optests: https://github.com/CaoE/pytorch/blob/a1394be10096b91c0b5528fccf709e6e73078832/torch/testing/_intern…
-
### What feature would you like to see?
In order to have closer parity with the existing supported integer types, it would be helpful to have access to other common floating point types such as 16 (h…
-
```
Loaded cached embeddings from file.
Checking if the server is listening on port 8890...
Server not ready, waiting 4 seconds...
Traceback (most recent call last):
  File "D:\LivePortrait-Windows-v2…
```
-
I've drafted the changes necessary to enable reading of mode 12 half-precision floats by default, and optionally writing mode 12, by:
1. `MRCFile my_file`
2. `my_file.SetToFp16()`
   - updates `my_file->…`
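As a point of comparison only (the draft above targets the project's C++ `MRCFile` class, whose exact API is not reproduced here), the Python `mrcfile` package maps `np.float16` data to mode 12 in recent releases; a minimal sketch, assuming mrcfile >= 1.4:

```py
import numpy as np
import mrcfile

# Writing float16 data produces a mode 12 (half-precision) file in recent mrcfile versions.
vol = np.random.rand(8, 8, 8).astype(np.float16)
with mrcfile.new("half_precision.mrc", overwrite=True) as mrc:
    mrc.set_data(vol)
    print(int(mrc.header.mode))  # expected: 12

# Reading the file back keeps the half-precision dtype.
with mrcfile.open("half_precision.mrc") as mrc:
    print(mrc.data.dtype)  # float16
```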
-
## ❓ Questions and Help
As written in https://github.com/facebookresearch/maskrcnn-benchmark/issues/807#issuecomment-500112612, DTYPE "float16" does not make training faster (and takes the same amou…
-
I found this problem some days ago. I understand this is not a serious problem, but I feel it is worth mentioning here.
### Reproducing code example:
```py
import numpy as np
n = np.uint1…
```