-
On import, [`ml_dtypes`](https://github.com/jax-ml/ml_dtypes) adds new entries to `np.sctypeDict` so that e.g. `np.dtype("int4")` returns an int4 dtype defined outside NumPy.
Since jax currently do…
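For readers unfamiliar with the mechanism: `np.dtype(name)` resolves string names through NumPy's scalar-type registry, so a package can make new names resolvable simply by registering entries at import time. A minimal sketch (the `ml_dtypes` lines are left commented out, since they assume the package is installed):

```python
import numpy as np

# np.dtype(...) resolves string names through NumPy's scalar-type registry,
# which is why a third-party import can make new names like "int4" work.
print(np.dtype("float32"))  # a name NumPy itself registers

# With ml_dtypes installed, importing it registers its dtypes too:
#   import ml_dtypes
#   print(np.dtype("int4"))      # now resolves to ml_dtypes' int4
#   print(np.dtype("bfloat16"))  # likewise for bfloat16
```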
-
Hello,
Is there a way to run this code on CPU instead of CUDA? I get the following error after changing `device='cpu'` in the example code.
torch._dynamo.exc.BackendCompilerFailed: backend='in…
-
LLVM is now perfectly capable of emulating 16-bit floating point math on platforms that don't have it. This was not true when our float16 emulation code was written.
This would seem like a no-brain…
-
Hi, I tried to train a quantized model that fits my VRAM (I have a GTX 1070 Ti), but I got an error that did not occur on a friend's computer with an RTX 2070 (same VRAM, but a more recent card):
![…
-
Hi, congratulations on this nice model.
I am wondering whether it is possible to run the model using CPU only, and whether this has been tested?
-
## Motivation
Expand the PyTorch c10d backend to allow dynamically loading non-built-in communication libraries, as a preparation step for integrating Intel CCL (aka MLSL) into PyTorch as another c10d backend fo…
-
Running the following code yields an error saying the model is not supported. I would love to see the model supported, since it is currently one of the few Finnish-language LLMs.
```
from unsloth import FastLanguageModel
impo…
-
### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …
-
https://pytorch.org/docs/stable/notes/amp_examples.html
Currently, `bfloat16` works well without grad scaling. But to use `fp16` and `fp8` (`fp8` - in the future, when the support for Hopper/40XX G…
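For reference, the usual fp16 pattern from that page wraps the loss in a `GradScaler` so small gradients don't underflow in fp16's narrow exponent range, while `bfloat16` keeps fp32's exponent range and can usually skip it. A minimal sketch (the scaler is disabled and autocast uses CPU bfloat16 here so it runs without a GPU; for CUDA fp16 you would enable the scaler and use `device_type="cuda", dtype=torch.float16`):

```python
import torch

model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# For CUDA fp16, GradScaler multiplies the loss by a large factor so tiny
# gradients survive fp16, then unscales before opt.step() and skips steps
# that produced inf/nan. bfloat16 usually doesn't need this, so it's disabled.
scaler = torch.cuda.amp.GradScaler(enabled=False)

x, y = torch.randn(8, 4), torch.randn(8, 1)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = torch.nn.functional.mse_loss(model(x), y)

scaler.scale(loss).backward()  # no-op scaling while disabled
scaler.step(opt)               # falls through to opt.step()
scaler.update()
```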
-
### System Info
- `transformers` version: 4.44.2
- Platform: Windows-11-10.0.22631-SP0
- Python version: 3.12.4
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate versio…