-
When exporting the Gemma model with Hugging Face `transformers`,
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google…
-
### Your current environment
```text
PyTorch version: 2.3.0a0+ebedce2
Is debug build: False
CUDA used to build PyTorch: 12.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
…
-
Dear all,
On the current commit of Flint (449694569163502fc31d66ac5363724969c9e756), compiling on my laptop with an AMD Ryzen 7 (see below for the output of `cat /proc/cpuinfo`) yields
```text
In file …
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS…
-
# Steps
Access the exposed parameters via the host DAW (in my case Bitwig Studio 5.2 Beta 10 Flatpak on Manjaro Linux).
# Expected behavior
Each parameter would be exposed only once.
# Bug
The paramet…
-
### 🐛 Describe the bug
The following code fails if I run it on an H100. It fails for both the CUTLASS and cuSPARSELt backends, but hits a different error in each case. It works if I run on an A100. I…
-
### What problem are you looking to solve?
The [TP-Link FAQ](https://www.tp-link.com/br/support/faq/3272/) for installing the Omada SDN Controller on Linux suggests Ubuntu 22.04 as the base system, with Mong…
-
### Your current environment
```text
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: …
-
### 🐛 Describe the bug
```python
import torch
assert torch.xpu.is_available(), "Intel XPU is not available"
batch_size = 4
vocab_size = 4
out = torch.randn(batch_size, vocab_size, dtype=to…
-
**Describe the bug**
Running the `compute-sanitizer` tool on code invoking the parquet reader fails; this led me to try the `PARQUET_TESTS` unit tests, which fail very early. This blocks me from usi…