-
### 🐛 Describe the bug
torchbench_amp_bf16_training
xpu train doctr_det_predictor
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benc…
-
### 🐛 Describe the bug
torchbench_amp_bf16_training
xpu train doctr_reco_predictor
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benc…
-
### 🐛 Describe the bug
torchbench_amp_bf16_inference
xpu eval hf_clip
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/ben…
-
### 🐛 Describe the bug
torchbench_amp_bf16_inference
- [ ] `detectron2_fasterrcnn_r_101_c4`
- [ ] `detectron2_fasterrcnn_r_101_dc5`
- [ ] `detectron2_fasterrcnn_r_101_fpn`
- [ ] `detectron2_fas…
-
We attempted to run tutorials/peft-curation-with-sdg and are facing runtime errors; details are below, along with the environment setup we tried.
```
python ./main.py \
--api-key…
-
### 🐛 Describe the bug
torchbench_amp_bf16_training
xpu train torchrec_dlrm
ERROR:common:
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-op…
-
### 🐛 Describe the bug
torchbench_amp_fp16_training
xpu train dlrm
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", li…
-
### 🐛 Describe the bug
torchbench_amp_bf16_training
- [ ] `detectron2_fasterrcnn_r_101_c4`
- [ ] `detectron2_fasterrcnn_r_101_dc5`
- [ ] `detectron2_fasterrcnn_r_101_fpn`
- [ ] `detectron2_fast…
-
### 🐛 Describe the bug
torchbench_amp_bf16_inference
- [ ] `sam_fast`
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.p…
-
### 🐛 Describe the bug
I'm trying to add a micro-benchmark for flex attention, which is implemented as a HOP (higher-order operator). I use `torch.utils.flop_counter.FlopCounterMode`, but it doesn't support capturing FLOP f…