-
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_copysign_cuda_float16&suite=TestInductorOpInfoC…
-
Can we add [half](https://half.sourceforge.net/) as a dependency to support half-precision floating-point numbers?
I've done a bit of simple testing, and it seems to work fine.
-
### Feature description
I would like to be able to store and load float16 values in datasets. Many dataset formats support this, and many modern compilers support this as well. I do need to store flo…
-
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_lerp_cuda_float16&suite=TestInductorOpInfoCUDA&…
-
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_isreal_cpu_float16&suite=TestInductorOpInfoCPU&…
-
I got the following error. Is there a way to fix it? My machine runs whisper without problems, so I think whisperx should also be adapted to machines without fp16.
```
Traceback (most recent call l…
```
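In the meantime, here is a minimal sketch of the usual workaround: request a compute type the hardware does support (e.g. `int8` or `float32`) when loading the model, instead of float16. This assumes the `whisperx.load_model` / `load_audio` / `transcribe` calls from the WhisperX README; the model name and audio path are placeholders.

```python
import whisperx

device = "cpu"            # or "cuda" on a GPU without efficient fp16 support
audio_file = "audio.mp3"  # placeholder path

# Ask for a non-fp16 compute type so the backend does not try to run
# float16 kernels on hardware that cannot support them.
model = whisperx.load_model("large-v2", device, compute_type="int8")

audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=4)
print(result["segments"])
```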
-
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_permute_copy_cuda_float16&suite=TestInductorOpI…
-
Hello,
I tried to run fast tuning of a GEMM with float16:
```python
from bitblas.base.roller.policy import TensorCorePolicy, DefaultPolicy
from bitblas.base.arch import CUDA
from bitblas.base.uti…
```
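For comparison, here is a minimal sketch of the same fp16 GEMM driven through the higher-level `bitblas.MatmulConfig` / `bitblas.Matmul` interface rather than the roller policy path above. The exact parameter names and the "nt" layout convention are assumptions about the installed BitBLAS version.

```python
import torch
import bitblas

# Assumed high-level API: MatmulConfig describes an fp16 x fp16 -> fp16 GEMM
# with the weight stored as (N, K) ("nt" layout); Matmul builds a tuned kernel.
config = bitblas.MatmulConfig(
    M=1024,
    N=1024,
    K=1024,
    A_dtype="float16",
    W_dtype="float16",
    accum_dtype="float16",
    out_dtype="float16",
    layout="nt",
    with_bias=False,
)
matmul = bitblas.Matmul(config=config)

# Random fp16 operands on the GPU; W is (N, K) because of the assumed "nt" layout.
a = torch.rand(1024, 1024, dtype=torch.float16, device="cuda") - 0.5
w = torch.rand(1024, 1024, dtype=torch.float16, device="cuda") - 0.5

c = matmul(a, w)
# Loose tolerances because the kernel accumulates in float16.
print(torch.allclose(c, a @ w.T, rtol=1e-2, atol=1e-1))
```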
-
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_masked_softmax_cuda_float16&suite=TestInductorOpIn…
-
I have managed to get a model converted using the conversion script that I modified:
```py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_fun…
```