-
Hello!
I'm very excited about using this library; however, the README says it is in a broken state, waiting on fixes in the CMA-ES repo.
I see the CMA-ES repo is more active, with many commits r…
-
## Bug Description
It works fine with a static-shaped model but fails to load with a dynamic-shaped one.
```
DEBUG:torch_tensorrt.dynamo._compiler:Input graph: graph():
    %arg0_1 : [num_users=…
```
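For reference, a minimal sketch of requesting a dynamic-shaped input through `torch_tensorrt.Input` (the toy model, shapes, and dtype are assumptions, not taken from this report):

```python
import torch
import torch_tensorrt

# Hypothetical stand-in model; the report's actual model is not shown.
model = torch.nn.Linear(64, 32).eval().cuda()

# Dynamic shape spec: the batch dimension varies between min and max,
# with opt_shape used by TensorRT for kernel selection.
dyn_input = torch_tensorrt.Input(
    min_shape=(1, 64),
    opt_shape=(8, 64),
    max_shape=(32, 64),
    dtype=torch.float32,
)
trt_model = torch_tensorrt.compile(model, inputs=[dyn_input])
```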
-
### 🚀 The feature, motivation and pitch
Hi,
To warp data according to a batch of affine transformations, two functions need to be called sequentially (see the sketch below):
1. [affine_grid](https://p…
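A minimal sketch of that two-step warp, assuming the second function is `grid_sample` and using the standard `torch.nn.functional` API (the identity transform and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 3, 32, 32)          # (N, C, H, W) batch to warp
theta = torch.zeros(4, 2, 3)           # batch of 2x3 affine matrices
theta[:, 0, 0] = theta[:, 1, 1] = 1.0  # identity transforms for illustration

# Step 1: build a sampling grid from the affine matrices.
grid = F.affine_grid(theta, size=list(x.shape), align_corners=False)
# Step 2: sample the input at the grid locations.
warped = F.grid_sample(x, grid, align_corners=False)
```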
-
### 🐛 Describe the bug
```python
@torch.compile
def forward(self, x):
    """Forward function."""
    x = self.patch_embed(x)
    Wh, Ww = x.size(2), x.size(3)
    if …
```
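The truncated repro branches on runtime sizes right after `patch_embed`. As a hedged, simplified stand-in (not the actual Swin code), shape-dependent control flow under `torch.compile` looks like this:

```python
import torch

@torch.compile
def f(x: torch.Tensor) -> torch.Tensor:
    h, w = x.size(2), x.size(3)  # values derived from tensor shapes
    if h % 2 == 1:               # branching on shapes adds guards,
        x = x[:, :, : h - 1]     # so new shapes can trigger recompilation
    return x

f(torch.randn(1, 3, 8, 8))
f(torch.randn(1, 3, 9, 9))  # takes the other branch -> recompile
```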
-
### 🐛 Describe the bug
Compiling the SwinTransformer `forward` at https://github.com/yoxu515/aot-benchmark/blob/paot/networks/encoders/swin/swin_transformer.py#L684. It works correctly at trai…
-
## Bug Description
```
DEBUG:torch_tensorrt.dynamo._compiler:Input graph: graph():
    %linear_weight : [num_users=1] = get_attr[target=linear.weight]
    %linear_bias : [num_users=1] = get_att…
```
-
### 🐛 Describe the bug
There is an error when passing an attention mask to `torch.nn.TransformerEncoderLayer`.
Minimal example:
```python
if __name__ == "__main__":
    import torch
    model = torch.nn.TransformerEncoderLaye…
```
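For context, a hedged reconstruction of the kind of call involved (d_model, shapes, and the causal mask are assumptions; the actual repro is truncated above):

```python
import torch

layer = torch.nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
src = torch.randn(2, 5, 16)  # (batch, seq, d_model)
# Boolean attention mask: True marks positions that may NOT be attended to.
mask = torch.triu(torch.ones(5, 5, dtype=torch.bool), diagonal=1)
out = layer(src, src_mask=mask)
```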
-
The following ops use `ir.FallbackKernel` via `make_fallback()` in [lowering.py](https://github.com/pytorch/torchdynamo/blob/main/torchinductor/lowering.py#L894) and appear in benchmarks. We sh…
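For context, a hedged sketch of the registration pattern named above; the module path follows the linked repo layout, and `aten.sort` is an arbitrary illustrative op, not one from this list:

```python
import torch
# Module path as in the linked repo; newer PyTorch moved this to
# torch._inductor.lowering.
from torchinductor.lowering import make_fallback

aten = torch.ops.aten

# make_fallback registers a lowering that wraps the op in ir.FallbackKernel,
# i.e. inductor calls back into the eager ATen kernel instead of codegen.
make_fallback(aten.sort)  # illustrative op
```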
-
## Bug Description
I'm experimenting with TorchTRT on a model partitioned across two GPUs using pipeline-parallelism techniques. The first half of my network is on GPU0 and the second half is on GPU1. Wh…
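A minimal sketch of the partitioning described, assuming a toy two-stage module (layer sizes and names are illustrative):

```python
import torch
import torch.nn as nn

class TwoStage(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage0 = nn.Linear(64, 64).to("cuda:0")  # first half on GPU0
        self.stage1 = nn.Linear(64, 10).to("cuda:1")  # second half on GPU1

    def forward(self, x):
        x = self.stage0(x.to("cuda:0"))
        return self.stage1(x.to("cuda:1"))  # hop to GPU1 mid-forward

out = TwoStage()(torch.randn(4, 64))
```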