-
I see different behaviors in NumPy and JAX.
In NumPy,
```py
>>> import numpy as np
>>> import ml_dtypes
>>> a = np.ones((4, 4), dtype=ml_dtypes.bfloat16)
>>> a@a
array([[4., 4., 4., 4.],
       [4., 4., 4., 4.],
…
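If `ml_dtypes` isn't installed, the rounding step can be sketched in plain NumPy: bfloat16 is exactly the high 16 bits of a float32, with round-to-nearest-even applied to the discarded low bits. This is an illustrative stand-in (assuming finite inputs), not what NumPy or JAX actually do internally:

```python
import numpy as np

def to_bf16(x):
    """Round float32 values to the nearest bfloat16 (round-to-nearest-even),
    returned as float32 numbers exactly representable in bfloat16.
    Assumes finite inputs; NaN/inf bit patterns are not handled specially."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    # bfloat16 keeps only the top 16 bits of a float32; add the rounding
    # bias plus the "ties to even" bit before truncating the low 16 bits.
    bias = np.uint32(0x7FFF)
    tie_break = (bits >> np.uint32(16)) & np.uint32(1)
    rounded = bits + bias + tie_break
    return (rounded & np.uint32(0xFFFF0000)).view(np.float32)

a = to_bf16(np.ones((4, 4)))
print(a @ a)                # every entry is exactly 4.0 (representable in bfloat16)
print(to_bf16(1 + 2**-9))   # 1.0: the 2**-9 bit falls below bfloat16's 7-bit mantissa
```

The matmul result here stays float32; the point is only that 1.0 and 4.0 survive bfloat16 rounding exactly, so the values shown above are not themselves rounding artifacts.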
-
The floating-point introspection function `issubnormal(x)` is not implemented for `x::BFloat16`. Since BFloat16 has the same exponent range as Float32, I suggest the following implementation:
```ju…
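Since bfloat16 shares Float32's exponent field, its smallest normal magnitude is 2^-126, and for a value already representable in bfloat16 the check reduces to "nonzero, finite, and smaller in magnitude than that threshold". A sketch of the same idea in Python (`issubnormal_bf16` is a hypothetical helper, not part of any library):

```python
import math

# Smallest normal bfloat16 magnitude: same as Float32, because the
# 8-bit exponent field is identical (assumption stated in the report).
BF16_SMALLEST_NORMAL = 2.0 ** -126

def issubnormal_bf16(x: float) -> bool:
    """True if x, taken as a bfloat16 value, is subnormal: nonzero,
    finite, and smaller in magnitude than the smallest normal number."""
    return math.isfinite(x) and 0.0 < abs(x) < BF16_SMALLEST_NORMAL

print(issubnormal_bf16(2.0 ** -127))  # True: below the normal range
print(issubnormal_bf16(1.0))          # False: a normal value
print(issubnormal_bf16(0.0))          # False: zero is not subnormal
```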
-
### 1. Environment
* WSL2 Ubuntu 22.04
* Python 3.11
* CUDA 12.1
* torch 2.3.0+cu121
* torchvision 0.18.0+cu121
### 2. Error Report
I downloaded the bear dataset, and its directory looks like this:
```txt
-
-…
-
Hi, thanks for sharing your very efficient quantization method!
I was trying it out on a custom flux model and was surprised to see the saved model was the same size as the original bfloat16. I sus…
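One way to check whether the weights were actually quantized before saving is to total `itemsize × element count` over the checkpoint's tensors. A minimal sketch, with NumPy arrays standing in for the model tensors (float16 is used here as a 2-byte stand-in for bfloat16, int8 for an 8-bit quantized layout; all names are illustrative):

```python
import numpy as np

def total_bytes(state_dict):
    """Sum of raw tensor storage in a checkpoint-like dict of arrays."""
    return sum(t.size * t.itemsize for t in state_dict.values())

# Hypothetical single-layer "checkpoints" for comparison.
bf16_like = {"w": np.zeros((1024, 1024), dtype=np.float16)}  # 2 bytes/elt, like bfloat16
int8_like = {"w": np.zeros((1024, 1024), dtype=np.int8)}     # 1 byte/elt after 8-bit quantization

print(total_bytes(bf16_like), total_bytes(int8_like))  # 2097152 1048576
```

If the saved file's size matches the first number rather than the second, the tensors were most likely written back at their original 16-bit width.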
-
expected scalar type Half but found BFloat16
![2024-10-31 04_46_52-_Unsaved Workflow - ComfyUI](https://github.com/user-attachments/assets/b01e4f73-0506-459f-a156-ff4f8432c891)
-
Reproduction:
```diff
diff --git a/forge/test/mlir/test_ops.py b/forge/test/mlir/test_ops.py
index 37e7a8e..8b80cfa 100644
--- a/forge/test/mlir/test_ops.py
+++ b/forge/test/mlir/test_ops.py
@@ -542…
-
**Describe the bug**
When using an rwkv config (to avoid running into the issue from #1305), I get the following error:
```
Traceback (most recent call last):
File "/home/hatef.4/neox/gpt-neox/train.p…
-
Is there any way to use the model with the bfloat16 data type?
-
When I used this command to run the bear dataset, using the images that you processed:
ns-train gaussctrl --load-checkpoint unedited_models/bear/splatfacto/2024-11-04_182528/nerfstudio_models/step-0…
-
### Question
Is doing
```
model = HookedTransformer.from_pretrained_no_processing(
    model_name="google/gemma-2-2b-it",
    device=device,
    dtype=torch.bfloat16,
    default_padding_side…