pytorch / TensorRT

PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
https://pytorch.org/TensorRT
BSD 3-Clause "New" or "Revised" License

🐛 [Bug] `require_full_compilation` never reaches partitioner #3171

Open dgcnz opened 1 month ago

dgcnz commented 1 month ago

I'm not sure if this is intended, but it seemed odd to me that both the fast and global partitioners contain logic gated on `require_full_compilation` (see snippets 1 and 2), yet `compile_module` never passes that flag to the partitioning passes (see snippet 3).

Snippets

1. https://github.com/pytorch/TensorRT/blob/fa02fd3e3a85a6042d11a00cd386f6b69c1d6c4b/py/torch_tensorrt/dynamo/partitioning/_adjacency_partitioner.py#L206-L209

2. https://github.com/pytorch/TensorRT/blob/fa02fd3e3a85a6042d11a00cd386f6b69c1d6c4b/py/torch_tensorrt/dynamo/partitioning/_global_partitioner.py#L68-L71

3. https://github.com/pytorch/TensorRT/blob/fa02fd3e3a85a6042d11a00cd386f6b69c1d6c4b/py/torch_tensorrt/dynamo/_compiler.py#L371-L388
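To illustrate the pattern being reported, here is a minimal, self-contained sketch (with hypothetical names, not the actual Torch-TensorRT code): a partitioning pass accepts the flag with a default value, but the caller never forwards the user's setting, so the pass always runs with the default.

```python
# Hypothetical sketch of the bug pattern described above; names are
# illustrative and do not match the actual Torch-TensorRT internals.

def partition(gm, require_full_compilation=False):
    # The flag gates logic inside the partitioner (as in snippets 1 and 2).
    return "full" if require_full_compilation else "partial"

def compile_module(gm, settings):
    # Bug pattern (as in snippet 3): the user's setting is never passed
    # along, so partition() always sees its default value of False.
    return partition(gm)

result = compile_module(object(), {"require_full_compilation": True})
# result is "partial" even though the user requested full compilation
```

The fix implied by the report would be for `compile_module` to thread the user-supplied flag through to the partitioner call.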

apbose commented 1 month ago

Thanks for the issue. Issue #3177 tracks this.