-
Using Google Colab
This issue occurs with the A100 GPU, L4 GPU, T4 GPU and TPU v2-8
Everything works as normal (though slower) on the regular CPU runtime
-
**Describe the bug**
HiFi2 and HiFi4 give different results for a bfloat16 @ bfloat16 matmul on Wormhole
**To Reproduce**
On the branch [yieldthought/12453](https://github.com/tenstorrent/tt-metal/…
-
Many LLMs are trained with bf16, so if we want to use the hidden states of LLMs for retrieval, those vectors will be in bf16 dtype. It would be helpful to support bf16 in Faiss so that we can use LLMs as…
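Until native bf16 support lands, one workaround is widening bf16 vectors to float32 before indexing: bfloat16 is just the top 16 bits of a float32, so the mapping is cheap. A minimal pure-Python sketch of the bit-level conversion (illustrative only; real code would do this vectorized, e.g. with NumPy bit views):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Round a float to bfloat16, returning the 16 raw bits.

    bfloat16 keeps float32's sign and 8 exponent bits and drops the
    low 16 mantissa bits; here we use round-to-nearest-even on the
    dropped bits, which is what most hardware does.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)  # ties to even
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bf16_bits_to_f32(b: int) -> float:
    """Widen 16 bfloat16 bits back to float32 (exact: just zero-pad)."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]

# bf16 keeps only ~3 decimal digits, so small detail is rounded away:
roundtripped = bf16_bits_to_f32(f32_to_bf16_bits(1.001))  # -> 1.0
```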
-
### Suggestion Description
It is similar to the "atomicAdd() to support half2 and bfloat162" suggestion.
Could bf16.h and bfloat16.h be combined into a single file?
### Operating System
Ubuntu
### GPU
M…
-
OCaml 5.2 will have float16; we will still need bfloat16...
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
Have added bfloat16 support in knowhere, and qu…
-
Maybe as a workaround (Julia lowers `x % T` to `rem(x, T)`, so this makes `x % BFloat16` convert):
```
Base.rem(x::Real, ::Type{BFloat16}) = BFloat16(x)
```
-
Hi there!
I’ve implemented bfloat16 in Swift over [here](https://github.com/ivarflakstad/BFloat16.swift).
If you want, I am open to having it be part of Numerics. If so, lmk what changes would be n…
-
Julia 1.11 introduces BFloat16 codegen support, so let's use this issue to track support for that.
Right now, it looks like we support the type, but somehow still emit conversions:
```julia-repl…
-
Is there some global flag to make everything bfloat16? I recently had to go through the code and hardcode bfloat16... which seems kind of silly.
Otherwise, I was getting bfloat16 vs float errors
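If this is PyTorch, a global flag does exist: `torch.set_default_dtype(torch.bfloat16)` makes new floating-point tensors default to bf16. The underlying pattern is simple either way: a module-level default that constructors consult instead of hardcoding a dtype. A toy sketch of that pattern (the names `set_default_dtype` and `zeros` below are illustrative, not any real library's API):

```python
# Global default that tensor constructors fall back to when the
# caller does not pass an explicit dtype.
_DEFAULT_DTYPE = "float32"

def set_default_dtype(name):
    """Change the dtype used when constructors are called without one."""
    global _DEFAULT_DTYPE
    _DEFAULT_DTYPE = name

def zeros(n, dtype=None):
    """Toy tensor constructor: uses the global default if dtype is omitted."""
    return {"data": [0.0] * n, "dtype": dtype or _DEFAULT_DTYPE}

set_default_dtype("bfloat16")
t = zeros(4)  # picks up bfloat16 without hardcoding it at every call site
```

An explicit `dtype=` argument still wins over the global, so only the unannotated call sites change behavior.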