-
## 🐛 Bug
The [torch/benchmarks/dynamo](https://github.com/pytorch/pytorch/blob/97ff6cfd9c86c5c09d7ce775ab64ec5c99230f5d/benchmarks/dynamo/common.py#L2028) testing suite sets SGD as the optimizer and sets …
-
## 🐛 Bug
In the example below, we have two tensors: `t0` and `t1`. `t1` is created from a DLPack capsule generated from `t0`, so we could say they share the same storage. However, after modifying `t…
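A minimal sketch of the storage sharing this report relies on, using NumPy's DLPack support (assuming NumPy ≥ 1.23) in place of the report's torch tensors; the names `a` and `b` are illustrative, not from the original:

```python
import numpy as np

a = np.arange(4, dtype=np.float32)
b = np.from_dlpack(a)   # zero-copy import through the DLPack protocol
a[0] = 42.0             # write through the original array
print(b[0])             # the imported array observes the write: storage is shared
```

The same pattern applies to `torch.from_dlpack` / `torch.utils.dlpack.to_dlpack`: the importer wraps the exporter's buffer rather than copying it, which is why mutating one tensor is visible through the other.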
-
I'm attempting to run Kohya under Linux, on Kubuntu 24.04 (with a 4080 Super and a Ryzen 5900X), but I keep running into issues with it not finding `tensorrt`.
I'm using pyenv to use the required Py…
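A quick diagnostic for this kind of failure is to check whether the interpreter pyenv actually selected can see the package at all (a generic sketch; `tensorrt` is simply the package the error complains about):

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` is importable in the current interpreter."""
    return importlib.util.find_spec(name) is not None

# A False result here means the active (pyenv-selected) environment
# lacks the package, even if some other Python on the system has it.
print("tensorrt importable:", has_module("tensorrt"))
```

Run this with the exact interpreter Kohya launches (e.g. from inside its venv) so the result reflects the environment that is failing.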
-
Does anyone know if these pre-trained models can be converted to TensorRT engine executables? Or JIT'd by XLA for NVIDIA GPUs?
-
Expose a way to save out a computation graph, accompanied by all constant parameters and model weights, as relative-path HLO-format files. xla-rs has utilities for this, and it's the expected format for mod…
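A hedged sketch of the kind of on-disk layout such a feature implies (all file and directory names below are hypothetical illustrations, not xla-rs's actual convention):

```
saved_model/
  graph.hlo        # the serialized HLO computation
  weights/         # constants and parameters, referenced by relative path
    param_0.bin
    param_1.bin
```

The point of relative paths is that the whole directory stays relocatable: the graph file refers to its weights without baking in an absolute location.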
-
After commit https://github.com/openxla/xla/commit/d8f0c1acdb79c18cdce0a050b1d7c6baa8b9f14b, building XLA fails for CUDA backend. Reproducer:
```sh
$ ./configure.py --backend=CUDA --cuda_compiler=NV…
```
pearu updated 3 months ago
-
### Description of the bug:
I was trying to use the latest commit version to convert gemma2 since it seems like 0.2.0 doesn't support it. However, I can't even import it:
```python3
Traceback (mos…
```
-
### Describe the issue:
In https://github.com/pymc-devs/pytensor/pull/133, the second branch of this conditional:
https://github.com/pymc-devs/pytensor/blob/d175203b4e00f48db9c61b68a5f7026…
-
This issue will be used to track the work for zero-copy between CUDA and XLA.
Inspired by
- https://github.com/pytorch/pytorch/blob/f20e3ae0c36146c962a5665018e9ad662a7cf211/aten/src/ATen/DLConver…
-
Right now, the program crashes whenever you create a tensor on an XLA device and call `LazyTensorBarrier()` without setting `wait` to `true`. This can be bypassed by using `LazyTensorBarrier(wait: tru…