-
This is a follow-up to https://github.com/wasmerio/wasmer/pull/3430 - currently `test_cross_compile_python_windows` and `test_wasmer_create_exe_pirita_works` don't work on Windows when compiling f…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
### 🐛 Describe the bug
**The following code fails:**
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mis…
```
-
### 🐛 Describe the bug
torch.onnx exports double Constant 0.0 for Python's 0.0 literal in `torch.where`.
```python
import onnx
import torch
class Model(torch.nn.Module):
    def forward(…
```
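For reference, a minimal eager-mode sketch of the pattern the report describes: a Python float literal `0.0` passed to `torch.where`. The double Constant reportedly appears on the `torch.onnx` export path; the eager behavior itself is straightforward. The model name here is illustrative, since the original repro is truncated.

```python
import torch

class WhereModel(torch.nn.Module):
    def forward(self, x):
        # Python float literal 0.0: per the report, torch.onnx exports
        # this scalar as a float64 (double) Constant node.
        return torch.where(x > 0, x, 0.0)

out = WhereModel()(torch.tensor([-1.0, 2.0, -3.0]))
print(out)  # tensor([0., 2., 0.])
```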
-
### Your current environment
Previous fix from https://github.com/vllm-project/vllm/pull/3913 did not seem to work. Same issue still encountered.
```text
Collecting environment information...
I…
```
-
### 🐛 Describe the bug
How to reproduce:
```python
import torch
from torch import nn

# Simple model
simple_model = nn.Sequential(nn.Linear(10, 20), nn.BatchNorm2d(5), nn.ReLU())
simple_model.eval()…
```
-
### Describe the bug
I am trying to get a working version on Intel Dev Cloud. I tried both the Docker image from Intel's website and the instructions in the repo; both failed…
-
### 🐛 Describe the bug
`torch.compile` returns wrong value for conditional mask tensor operation
```py
import torch
torch.manual_seed(420)
x = torch.randn(1, 3, 2, 2)
class Model(torch.n…
```
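A minimal eager-mode sketch of the kind of conditional mask update the report refers to; the model body here is an assumption, since the original is truncated. Mismatches like this are usually found by comparing `model(x)` against `torch.compile(model)(x)`, with the eager-mode output as the reference.

```python
import torch

torch.manual_seed(420)
x = torch.randn(1, 3, 2, 2)

class MaskModel(torch.nn.Module):
    def forward(self, x):
        # Zero out negative entries via a boolean mask
        # (hypothetical body; the issue's actual model is truncated)
        out = x.clone()
        out[x < 0] = 0.0
        return out

eager = MaskModel()(x)
# Per the report, torch.compile produces a different result for an
# operation of this shape; eager-mode output is the reference.
print(eager)
```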
-
### 🐛 Describe the bug
I'm attempting to export a quantized model to ONNX that contains a number of tensor permutations. The input (and expected output) of these permutations are quantized tensors.
…
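A minimal sketch of permuting a quantized tensor on CPU, the operation the report describes as part of its export path; the scale and zero-point values here are illustrative, not taken from the report.

```python
import torch

x = torch.randn(2, 3, 4)
# Quantize to int8 with an illustrative scale/zero-point
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)

# permute works on quantized tensors and preserves the quantization params
qy = qx.permute(0, 2, 1)
print(qy.shape)  # torch.Size([2, 4, 3])
```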
-
### 🐛 Describe the bug
I am trying to move a tensor to the GPU with PyTorch 2.1.2 + ROCm 5.6 on Python 3.10.13 (Radeon RX 6650 XT).
Have a similar issue as: https://github.com/pytorch/pyt…