-
On Windows, Visual Studio 2022, no AVX, AMD Ryzen 9 5950X.
```
wuffs 0.3, decoding to WUFFS_BASE__PIXEL_FORMAT__RGB
-----------------------------------------------------
doDecodeFromBufferWithWu…
-
### What should we add?
Right now we're primarily using `ndarray`'s `dot()` method to do matmul in the accelerate crate (for example: https://github.com/Qiskit/qiskit/blob/bee2b95f6f790831ef1675b3119…
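For readers less familiar with `ndarray`, `dot()` on two 2-D arrays is a plain matrix product; the NumPy equivalent (an illustration of the operation only, not the Rust code path in the accelerate crate) is:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

# dot() on 2-D inputs is ordinary matrix multiplication,
# the same thing the @ operator computes
c = a.dot(b)
assert np.array_equal(c, a @ b)
```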
-
### Describe the issue:
With numpy 2.1.0, one gets undocumented output:
```
>>> np.set_printoptions(legacy="1.25")
```
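For context, `legacy="1.25"` asks NumPy 2.x to print scalars the way NumPy 1.25 did; a minimal sketch of the before/after:

```python
import numpy as np

x = np.float64(0.5)
before = repr(x)  # on NumPy 2.x this is "np.float64(0.5)"

# "1.25" is only accepted by NumPy >= 2.0; it restores the
# pre-2.0 plain scalar repr
np.set_printoptions(legacy="1.25")
after = repr(x)  # plain "0.5" again
print(before, after)
```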
### Reproduce the code example:
```python
import numpy as np
np.set_prin…
-
### 🐛 Describe the bug
Hi,
I'm trying to wrap and unwrap weight normalization and I get an error. Strangely, it complains about `weight` not being present even though we can see it when printing the…
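For anyone reproducing, here is a minimal wrap/unwrap round trip with the long-standing hook-based API (a sketch on a toy module, not the poster's model; recent PyTorch versions also ship a parametrization-based variant):

```python
import torch.nn as nn
from torch.nn.utils import weight_norm, remove_weight_norm

m = weight_norm(nn.Linear(4, 3), name="weight")

# After wrapping, `weight` is recomputed before each forward pass
# from the magnitude/direction parameters `weight_g` and `weight_v`
assert hasattr(m, "weight_g") and hasattr(m, "weight_v")

# Unwrapping folds them back into a single plain `weight` Parameter
remove_weight_norm(m, name="weight")
assert isinstance(m.weight, nn.Parameter)
```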
-
Hello.
This is my first post, so please excuse me if there are any mistakes.
The commands used are as follows.
```shell
cd reincarnating_rl
python -um reincarnating_rl.train \
--agent qd…
-
### Proposal to improve performance
Currently, vLLM allocates all available GPU memory after loading model weights, regardless of the max_model_len setting. This can lead to inefficient memory usage,…
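To make the proposal concrete: the KV-cache budget a model actually needs scales linearly with `max_model_len`, so pre-allocating for the full context when a shorter one is configured wastes memory. A rough back-of-the-envelope sketch (the function and the 7B-class numbers below are illustrative, not vLLM's actual accounting):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim,
                   max_model_len, max_num_seqs, dtype_bytes=2):
    """Rough upper bound on KV-cache memory: two tensors (K and V)
    per layer, one slot per token per concurrent sequence."""
    return (2 * num_layers * num_kv_heads * head_dim
            * max_model_len * max_num_seqs * dtype_bytes)

# Hypothetical config: 32 layers, 32 KV heads of dim 128, fp16, 16 seqs
full = kv_cache_bytes(32, 32, 128, max_model_len=32768, max_num_seqs=16)
short = kv_cache_bytes(32, 32, 128, max_model_len=2048, max_num_seqs=16)
print(full / 2**30, short / 2**30)  # 256.0 vs 16.0 GiB: 16x apart
```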
-
### Your current environment
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC ve…
-
### 🐛 Describe the bug
The following script attempts to fuse two custom operations together into a single custom op. One of the original ops, plus the fused op have multiple outputs. The resultin…
-
xmrig-6.16.2-linux-static-x64.tar.gz
It looks like `xmrig` doesn't support [client.reconnect](https://en.bitcoin.it/wiki/Stratum_mining_protocol#client.reconnect).
This config doesn't work, the …
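For reference, `client.reconnect` is a server-to-client notification; per the linked wiki its params are host, port, and wait time, all optional. A sketch of parsing it (the handler name and example message are made up for illustration):

```python
import json

def handle_stratum(line):
    """Return (host, port, wait_seconds) if the message is a
    client.reconnect notification, else None."""
    msg = json.loads(line)
    if msg.get("method") != "client.reconnect":
        return None
    # Params may be empty (reconnect to the same host) or
    # [host, port, wait_seconds] per the wiki description
    params = msg.get("params") or []
    host = params[0] if len(params) > 0 else None
    port = params[1] if len(params) > 1 else None
    wait = params[2] if len(params) > 2 else 0
    return host, port, wait

example = '{"id": null, "method": "client.reconnect", "params": ["pool.example.com", 3333, 5]}'
print(handle_stratum(example))  # ('pool.example.com', 3333, 5)
```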
-
### 🐛 Describe the bug
The model scripted with `torch.jit.script` and serialized with `torch.jit.save` is put on the GPU before inference, but the calculations seem to be performed on the CPU: the GPU memory …
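One way to sanity-check where a scripted model actually runs is to move it explicitly after loading and inspect the output's device; a sketch with a stand-in model (not the reporter's):

```python
import io
import torch
import torch.nn as nn

model = torch.jit.script(nn.Linear(4, 2))

# Round-trip through jit.save / jit.load (a file path works the same way)
buf = io.BytesIO()
torch.jit.save(model, buf)
buf.seek(0)
loaded = torch.jit.load(buf)

# jit.load does not implicitly move the module; call .to() explicitly
# and keep inputs on the same device as the parameters
device = "cuda" if torch.cuda.is_available() else "cpu"
loaded = loaded.to(device)
x = torch.randn(1, 4, device=device)
out = loaded(x)
assert out.device.type == device
```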