-
## 🚀 Feature
We currently only have the `xla` device to access LazyTensor. Since we are extending LazyTensor support to GPUs, it's not necessary to fall back to the CPU when we run into LTC-incompatible ope…
-
## Rationale
Users should be able to pass `Memory` to methods that take an `IEnumerable`, just as they can pass an array. Currently, this requires calling `ToArray()` to copy the contents of the `Memory` into an arra…
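For readers coming from other languages, the zero-copy idea here is analogous to Python's `memoryview`, which is itself iterable and sliceable without materializing a new array. A cross-language sketch (not the proposed .NET API):

```python
import array

# A mutable buffer of C ints; memoryview exposes it without copying.
buf = array.array('i', [1, 2, 3, 4])
view = memoryview(buf)

# The view is iterable, so it can go anywhere an iterable is expected,
# with no intermediate list or array allocated.
total = sum(view)
# Slicing a memoryview also avoids a copy.
tail = view[2:]

assert total == 10
assert list(tail) == [3, 4]
```

The proposal asks for the same ergonomics: let `Memory` flow into `IEnumerable`-taking APIs directly instead of forcing an allocating `ToArray()` round trip.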
-
Super quick user question on ReactPhysics3D if you have time...
I'm looking through the RP3D implementation and I notice that the inertia tensor in local-space coordinates (i.e. `mLocalInertiaTensor…
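For context, the usual convention (which I'd expect `mLocalInertiaTensor` to follow, though that is an assumption on my part) is that the local-space inertia tensor of a symmetric shape is diagonal, and the world-space tensor is recovered as `I_world = R · I_local · Rᵀ`. A plain-Python sketch of that standard formula for a solid box, not RP3D code:

```python
def box_inertia(mass, w, h, d):
    """Diagonal local-space inertia tensor of a solid box with full extents w, h, d."""
    f = mass / 12.0
    return [[f * (h*h + d*d), 0, 0],
            [0, f * (w*w + d*d), 0],
            [0, 0, f * (w*w + h*h)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def world_inertia(R, I_local):
    """I_world = R * I_local * R^T for a rotation matrix R."""
    return mat_mul(mat_mul(R, I_local), transpose(R))

# A 90-degree rotation about z swaps the x and y diagonal entries.
Rz90 = [[0, -1, 0],
        [1,  0, 0],
        [0,  0, 1]]
I_local = box_inertia(12.0, 1.0, 2.0, 3.0)   # diag(13, 10, 5)
I_world = world_inertia(Rz90, I_local)
assert [I_world[i][i] for i in range(3)] == [10.0, 13.0, 5.0]
```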
-
Is there a way to get the input shapes from `torch::jit::script::Module`? For example, if we want to resize the input image to the correct shape to perform classification using the traced module, …
-
### 🐛 Describe the bug
`torch.Tensor.flipud` causes a heap buffer overflow with specific input.
Test code:
```python
import torch
base = torch.randn(2,2)
self = torch.quantize_per_tensor(base,…
-
I am using the following code to seamlessly switch between variational, MAP and ML approaches, something that I found useful for prototyping new variational inference ideas:
```python
class Improp…
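The trick that makes this switching work is that an improper flat prior contributes only a constant to the log-joint, so the MAP estimate collapses to the ML estimate. A minimal numeric illustration with a Beta prior on a Bernoulli parameter (stdlib only; this is not the truncated class above, just the underlying idea):

```python
def map_bernoulli(successes, trials, alpha=1.0, beta=1.0):
    """MAP estimate of p under a Beta(alpha, beta) prior (mode of the posterior)."""
    return (successes + alpha - 1) / (trials + alpha + beta - 2)

# With the flat Beta(1, 1) prior, MAP equals the maximum-likelihood estimate k/n.
assert map_bernoulli(3, 10) == 3 / 10
# An informative Beta(2, 2) prior pulls the estimate toward 0.5.
assert map_bernoulli(3, 10, alpha=2.0, beta=2.0) == 4 / 12
```

Swapping the prior between improper (flat) and proper is exactly what lets one model definition serve ML, MAP, and variational runs.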
-
### Feature Description
Instead of aggregating the output of segmentation attributions (using, e.g., one-hot encoding), we can also aggregate the inputs for the spatial case.
### Use Case
This use cas…
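To make the proposal concrete: aggregating over the inputs means pooling per-pixel attribution values by spatial segment, rather than one-hot encoding the outputs. A toy sketch with hypothetical names (`attributions` and `segments` are flat per-pixel lists; this is not the library's actual API):

```python
def aggregate_by_segment(attributions, segments):
    """Sum per-pixel attribution values into one score per segment id."""
    scores = {}
    for value, seg_id in zip(attributions, segments):
        scores[seg_id] = scores.get(seg_id, 0.0) + value
    return scores

# Four "pixels" belonging to two segments.
attributions = [0.1, 0.4, 0.2, 0.3]
segments     = [0,   0,   1,   1]
assert aggregate_by_segment(attributions, segments) == {0: 0.5, 1: 0.5}
```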
-
While looking into the WebGPU backend and execution example I am left with a few questions.
I am currently working on porting the [Open Image Denoise](https://github.com/RenderKit/oidn) models to w…
-
I am trying to export custom ops to ONNX, namely an NMS set. I defined the static symbolic method. When there is a single output, I get the custom ONNX node; however, when the number of outputs is more than on…
-
I'm trying to run inference on an ONNX model created from a LightGBM model via Kotlin DL, and in every method (I tried the Raw ones too) I'm getting `class [J cannot be cast to class [[F ([J and [[F are in module java.b…