-
**Describe the bug**
When using lossless float16 encoding, specific ranges of binary values appear to be corrupted. Specifically, it appears that all values with a binary representation corresponding…
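Not the reporter's reproduction, but a minimal sketch of one way to localize such corruption, assuming the encoder is supposed to round-trip every float16 bit pattern; `encode`/`decode` are placeholders standing in for the codec under test, and NumPy is used only to enumerate the bit patterns.
```python
import numpy as np

def encode(arr: np.ndarray) -> bytes:
    # Placeholder for the lossless float16 encoder under test.
    return arr.tobytes()

def decode(buf: bytes) -> np.ndarray:
    # Placeholder for the matching decoder.
    return np.frombuffer(buf, dtype=np.float16)

# All 65536 possible float16 bit patterns, viewed as float16 values.
bits = np.arange(0x10000, dtype=np.uint16)
values = bits.view(np.float16)

round_tripped = decode(encode(values))
# Compare bit patterns rather than float values so NaN payloads and -0.0 are covered.
bad = np.flatnonzero(round_tripped.view(np.uint16) != bits)
if bad.size:
    print(f"{bad.size} corrupted patterns; first bad pattern is 0x{bits[bad[0]]:04x}")
else:
    print("all 65536 float16 bit patterns survive the round trip")
```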
-
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_grid_sample_cuda_float16&suite=Te…
-
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_pad_constant_cuda_float16&suite=T…
-
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_prelu_cpu_float16&suite=TestInductor…
-
### 🐛 Describe the bug
After the XNNPACK update in https://github.com/pytorch/pytorch/pull/139913, our nightly ARM build fails (the x86 build still works).
Build log:
```
[3174/5315] Building CXX obje…
```
-
As the title says: when converting a model, I want to specify that the data types of the input and output nodes stay unchanged (e.g., both float32), instead of being forced to float16 or int8 by the conversion to rknn. Is that possible?
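For context, a minimal sketch of the non-quantized conversion path in rknn-toolkit2, under stated assumptions: it only shows how to skip int8 quantization via `do_quantization=False`; whether the converter can also be told to keep float32 (rather than float16) for the input/output nodes is exactly the open question here, and the model path and `target_platform` are placeholders.
```python
from rknn.api import RKNN

rknn = RKNN()
# Placeholder target; set this to the chip you deploy to.
rknn.config(target_platform='rk3588')
rknn.load_onnx(model='model.onnx')   # placeholder ONNX model
# Skips int8 quantization; the toolkit may still lower compute to float16 internally.
rknn.build(do_quantization=False)
rknn.export_rknn('model.rknn')
rknn.release()
```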
-
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_embedding_bag_cuda_float16&suite=Tes…
-
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_max_unpool2d_cuda_float16&suite=TestI…
-
### System Info
python==3.10.15
cuda==11.8-8.8.1
torch==2.4.0
The latest version of the code
GPU A100_40G * 8
### Who can help?
@ziyuwan @Gebro13 @mengfn @gzqaq @YanSong97 @i
### Information
- …
-
```
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.load_lora_weights("xxx")
```
When I try to load a LoRA into PixArtAlphaPipeline, it throws this error:
AttributeError: …