-
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
tf 2.16.2
### Custom code
No
### OS platform and …
-
Platforms: inductor, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_addr_cuda_float64&suite=TestIndu…
-
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_stft_cuda_float64&suite=TestInductorOpInfoCUDA&lim…
-
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_hypot_cuda_float64&suite=TestInductorOpInfoCUDA&lim…
-
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_scatter_reduce_prod_cuda_float16&suite=TestI…
-
**Describe the bug**
When attempting to compile the YOLOv8 model using the NPU, a RuntimeError occurs.
```
---------------------------------------------------------------------------
RuntimeErro…
```
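The excerpt does not include the code that triggers the error, so the following is only a hypothetical repro sketch under stated assumptions: that the YOLOv8 weights are loaded via the `ultralytics` package, that the NPU is exposed as the `"npu"` device through a `torch_npu`-style plugin, and that `torch.compile` is the compilation entry point. None of these details are confirmed by the report.

```python
import torch
import torch_npu  # assumption: plugin that registers the "npu" device with PyTorch
from ultralytics import YOLO  # assumption: YOLOv8 weights come from the ultralytics package

# Load the underlying detection module and move it to the NPU.
model = YOLO("yolov8n.pt").model.eval().to("npu")

# Compile the module; the reported RuntimeError presumably surfaces either here
# or on the first execution of the compiled model.
compiled = torch.compile(model)

with torch.no_grad():
    dummy = torch.randn(1, 3, 640, 640, device="npu")
    out = compiled(dummy)
```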
-
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_linalg_cholesky_ex_cuda_float32&suite=TestInduc…
-
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_addbmm_cuda_float16&suite=TestInductorOpInfo…
-
ONNX has evolved into much more than just a specification for exchanging models. Here's a breakdown of why:
- **ONNX Runtime**: A highly optimized inference engine that executes ONNX models. This activel…
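To make the role of ONNX Runtime as an inference engine concrete, here is a minimal sketch of loading and running an ONNX model with its Python API. The file name `model.onnx`, the input shape, and the CPU execution provider are illustrative assumptions, not details taken from the excerpt above.

```python
import numpy as np
import onnxruntime as ort

# Create an inference session from a serialized ONNX model file.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's declared input so we can feed a correctly shaped tensor.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Run inference; passing None for the output names returns all outputs.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print([o.shape for o in outputs])
```

Passing `None` as the output-name list is a convenient way to retrieve every declared output when the output names are not known in advance.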
-
This issue originates from the following PR: #192.
Ping @mandel.