-
I use TensorRT 8.4; when the engine runs inference, this error occurs.
-
## Description
Hi maintainers,
I'm working on a TensorRT-based project named [Forward](https://github.com/Tencent/Forward/blob/master/README_EN.md), particularly its ONNX part. It's about d…
-
I was wondering if you have any references for inference on a TensorRT engine with batch size > 1. Any help would be great!
Thanks
-
Hey,
Following https://www.youtube.com/watch?v=x8ZtQ08A1F8, there is support for running inference on an NN within a graph node on the IntelVino backend. Is there support for the same operation …
-
Hello,
I have been testing out both the role-play and original pipelines, and I am seeing random undesired data in the outputs.
To use a recent rp pipeline run as an example, I deleted everythin…
-
### 🚀 The feature, motivation and pitch
@thomwolf and I have an idea to implement Llama from scratch in pure Triton, inspired by Karpathy. Liger Kernel already contains most of the kernels except m…
-
The first step is to define how a model will be specified to the engine. To get the ball rolling, here is how pyMC3 defines a model and runs inference:
```python
with pm.Model() as model:
mu …
```
EiffL updated 5 years ago
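To make the design idea concrete, here is a minimal sketch of the context-manager pattern pyMC3 uses to collect model variables. The names `Model` and `RandomVariable` are illustrative placeholders, not the API of pyMC3 or of any existing engine:

```python
# Hypothetical sketch of a pyMC3-style model container: entering the
# `with` block sets a module-level "current model", and every variable
# created inside the block registers itself with that model.

class RandomVariable:
    def __init__(self, name, dist, **params):
        self.name = name
        self.dist = dist
        self.params = params
        # Register with whichever model context is currently active.
        Model.current.register(self)

class Model:
    current = None  # the active model, set by the context manager

    def __init__(self):
        self.vars = []

    def __enter__(self):
        Model.current = self
        return self

    def __exit__(self, *exc):
        Model.current = None

    def register(self, rv):
        self.vars.append(rv)

with Model() as model:
    mu = RandomVariable("mu", "Normal", mu=0.0, sigma=1.0)
    obs = RandomVariable("obs", "Normal", mu=mu, sigma=1.0)

print([rv.name for rv in model.vars])  # -> ['mu', 'obs']
```

The point of the pattern is that users never pass the model around explicitly: variable constructors find it through the ambient context, which is what makes the pyMC3 syntax above possible.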
-
### Description of the bug:
Which ops are supported in a PT2E model? Conversion of e.g. SiLU or GELU doesn't work at the moment, even though some of these are supported by the tf-lite runtime, e.g. GELU is support…
-
When I modify the output size in UniAnimate_infer_long.yaml to (768, 1368), an error occurs:
Traceback (most recent call last):
File "/home/UniAnimate/inference.py", line 18, in
INFER_ENGINE.…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch…