-
### 🔍 Before submitting the issue
- [X] I have searched among the existing issues
- [X] I am using a Python virtual environment
### 🐞 Description of the bug
When using the argument `dry_run`, no out…
-
### 🐛 Describe the bug
The current implementation of PyTorch does not support deterministic algorithms for the fp8_e4m3 and fp8_e3m4 formats.
```
>>> import torch
>>> torch.__version__
'2.4.0…
-
### Required prerequisites
- [X] I have read the documentation.
- [X] I have searched the [Issue Tracker](https://github.com/PKU-Alignment/safe-rlhf/issues) and [Discussions](https://github.com/PKU-…
-
### Your current environment
The output of `python collect_env.py`
```text
--2024-09-26 15:08:57-- https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
Resolving raw.github…
-
### 🐛 Describe the bug
The re-inplace pass in post-grad should support re-inplacing customized kernels.
Take this as example: https://gist.github.com/leslie-fang-intel/d62da8d11192e0565ad3e3739f1bd…
-
### Checklist
- [X] I added a descriptive title
- [X] I searched open reports and couldn't find a duplicate
### What happened?
As part of PyTorch release workflow, we build conda release for…
-
### 🐛 Describe the bug
Log:
```
File "/home/gta/penghuic/pytorch_stock/third_party/torch-xpu-ops/test/xpu/../../../../test/test_content_store.py", line 34, in test_basic
writer.write_tensor(…
-
Migrated from [rt.perl.org#125296](https://rt-archive.perl.org/perl5/Ticket/Display.html?id=125296) (status was 'open')
Searchable as RT125296$
-
### 🐛 Describe the bug
Testing a variety of TP `requires_grad` patterns (validating maximally flexible finetuning) revealed `DTensor` sharding propagation of `aten.native_layer_norm_backward` (defaul…
-
### 🐛 Describe the bug
We discovered this incidentally because when Jax is imported, it imports xla, which adds a process-level hook that issues a warning if os.fork is used (which it is in some indu…
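The process-level hook mechanism described above can be sketched with the standard library alone. This is a hypothetical illustration of the pattern, not XLA's actual code: `os.register_at_fork` lets a library run a callback in the parent just before every `os.fork()`.

```python
import os
import warnings

# Hypothetical stand-in for the hook a runtime might install: warn in
# the parent process whenever fork() is about to happen.
def _warn_before_fork():
    warnings.warn("os.fork() called after the runtime was initialized",
                  RuntimeWarning)

os.register_at_fork(before=_warn_before_fork)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    pid = os.fork()
    if pid == 0:          # child: exit immediately, run nothing else
        os._exit(0)
    os.waitpid(pid, 0)    # parent: reap the child

messages = [str(w.message) for w in caught]
```

Note that `os.register_at_fork` is Unix-only, and once registered the callback cannot be removed, which is why an import-time hook like this fires for every subsequent fork in the process.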