-
Many LLMs are trained in bf16, so if we want to use the hidden states of LLMs for retrieval, those vectors will be in bf16 dtype. It would be helpful to support bf16 in Faiss so that we can use LLMs as…
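A minimal sketch of the usual workaround today: expand (or cast) bf16 vectors to float32 before handing them to Faiss, since Faiss indexes operate on float32. The bit-level round-trip below uses only numpy (which has no native bfloat16); with torch tensors you would simply call `hidden.float().numpy()`.

```python
import numpy as np

def f32_to_bf16_bits(x):
    """Store float32 values as bfloat16 bit patterns (simple truncation)."""
    return (x.astype(np.float32).view(np.uint32) >> 16).astype(np.uint16)

def bf16_bits_to_f32(b):
    """Expand stored bfloat16 bits back to float32 (lower mantissa zero-filled)."""
    return (b.astype(np.uint32) << 16).view(np.float32)

# Hidden states saved in bf16, reloaded as float32 for Faiss indexing.
rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8)).astype(np.float32)
bits = f32_to_bf16_bits(h)      # what a bf16 checkpoint effectively stores
h32 = bf16_bits_to_f32(bits)    # float32 view suitable for faiss.IndexFlat*
# bf16 keeps a 7-bit mantissa, so truncation error is below 2**-7 relative.
assert np.max(np.abs(h - h32)) < np.max(np.abs(h)) * 2**-7
```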
-
That is the standard for machine learning, and we might as well test for it rather than Float16, which is quite flaky.
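One reason Float16 tests are flakier: bfloat16 keeps float32's 8-bit exponent, so it spans the same magnitude range, while float16 overflows past 65504. A small numpy sketch (bf16 emulated by bit truncation, since numpy has no native bfloat16):

```python
import numpy as np

x = np.array([70000.0], dtype=np.float32)   # beyond float16 max (65504)
as_fp16 = x.astype(np.float16)              # overflows to inf
# bf16 emulated: keep only the top 16 bits (sign, 8-bit exponent, 7-bit mantissa)
as_bf16 = ((x.view(np.uint32) >> 16) << 16).view(np.float32)
assert np.isinf(as_fp16[0])                 # float16 cannot represent it
assert np.isfinite(as_bf16[0])              # bfloat16 can, just more coarsely
```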
-
Command to run:
pytest tests/tt_eager/python_api_testing/unit_testing/misc/test_layernorm.py::test_layernorm_mix_precision
All the errors are float32:
```
============================================…
```
-
I used the ORPO Colab example for the Mistral model and I am getting this error. I am using the configs below:
from trl import ORPOConfig, ORPOTrainer
from unsloth import is_bfloat16_supported
orpo_trainer …
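For reference, a hedged sketch of what such a config typically looks like. The field values here are hypothetical, not the reporter's; the API names come from trl's `ORPOConfig`, and the part relevant to this bf16 thread is the `bf16`/`fp16` toggle driven by hardware support:

```python
from trl import ORPOConfig
from unsloth import is_bfloat16_supported

# Hypothetical values for illustration only.
orpo_config = ORPOConfig(
    output_dir="outputs",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=8e-6,
    beta=0.1,                        # ORPO odds-ratio loss weight
    max_length=1024,
    max_prompt_length=512,
    # Train in bf16 where the GPU supports it, otherwise fall back to fp16.
    bf16=is_bfloat16_supported(),
    fp16=not is_bfloat16_supported(),
)
```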
-
OCaml 5.2 will have float16, but we will still need bfloat16...
-
**Describe the bug**
ttnn::transpose has a data mismatch with torch::transpose for the bfloat16 data format on specific shapes:
```
Input shape (1,12,32,100) and dims for transpose (-3,-2) - Tensor mismatc…
```
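For reference, the shape the torch side of the comparison produces for that case, sketched with numpy's `swapaxes`, which has the same axis semantics as `torch.transpose(x, -3, -2)` (numpy has no native bfloat16, so float32 stands in here):

```python
import numpy as np

x = np.zeros((1, 12, 32, 100), dtype=np.float32)  # bfloat16 not native to numpy
y = np.swapaxes(x, -3, -2)  # same semantics as torch.transpose(x, -3, -2)
assert y.shape == (1, 32, 12, 100)
```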
-
```
No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 577, 16, 64) (torch.bfloat16)
key : shape=(1, 577, 16, 64) (torch.bfloat16)
value : shape=(1, 577, 16, 64) …
```
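This error usually means no kernel was built for that dtype/shape combination; a common workaround (an assumption here, not a confirmed fix for this report) is casting `query`/`key`/`value` to a supported dtype such as float16 or float32 before the call. For clarity, a numpy sketch of the computation those `(batch, seq, heads, dim)` inputs describe:

```python
import numpy as np

def sdpa(q, k, v):
    """Plain scaled dot-product attention over (batch, seq, heads, dim) inputs."""
    b, s, h, d = q.shape
    # Move heads before seq: (b, h, s, d), as attention kernels expect.
    q, k, v = (np.moveaxis(t, 2, 1) for t in (q, k, v))
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)   # (b, h, s, s)
    w = np.exp(scores - scores.max(-1, keepdims=True)) # stable softmax
    w /= w.sum(-1, keepdims=True)
    out = w @ v                                        # (b, h, s, d)
    return np.moveaxis(out, 1, 2)                      # back to (b, s, h, d)

q = k = v = np.random.default_rng(0).standard_normal((1, 577, 16, 64)).astype(np.float32)
out = sdpa(q, k, v)
assert out.shape == (1, 577, 16, 64)
```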
-
Hi there!
I've implemented bfloat16 in Swift over [here](https://github.com/ivarflakstad/BFloat16.swift).
If you want, I am open to having it be part of Numerics. If so, let me know what changes would be n…
-
Error as in the subject line:
```
File "D:\ComfyUI-aki-v1.2\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\Comf…
```
-
Using Google Colab
This issue occurs with the A100, L4, and T4 GPUs and TPU v2-8.
Everything works as normal (though slower) on the regular CPU runtime.