-
### Describe the bug
When using mmrazor/main/tools/visualizations/feature_diff_visualization.py to visualize faster-rcnn from mmdetection, it reports that frame_id and video_len are required. The cause is…
-
**Description**
We found that the performance of triton+tensorrt under stable QPS differs significantly from that under uneven QPS. As follows:
- uneven QPS
(1) QPS
![image](https://github.com/triton-inference-se…
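The difference between stable and uneven QPS can be reproduced with a simple request schedule. A minimal sketch (the constant-interval vs. Poisson-burst shapes are assumptions for illustration, not the load generator used in this report):

```python
import random

def stable_schedule(qps, duration_s):
    """Request send times at a constant interval (stable QPS)."""
    interval = 1.0 / qps
    return [i * interval for i in range(int(qps * duration_s))]

def uneven_schedule(qps, duration_s, seed=0):
    """Same average QPS, but exponential (Poisson) inter-arrival gaps,
    which produce bursts and idle stretches (uneven QPS)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while t < duration_s:
        times.append(t)
        t += rng.expovariate(qps)  # mean gap = 1/qps
    return times
```

Replaying both schedules against the same Triton endpoint and comparing latency percentiles would isolate the effect of burstiness from that of average load.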
-
Original Issue: https://github.com/tensorflow/tensorflow/issues/59232
Opening on behalf of @DerEchteFeuerpfeil
# Description of the issue
Okay so to preface this, I have been working on this for …
-
### What version of Hono are you using?
4.6.8
### What runtime/platform is your app running on?
Bun
### What steps can reproduce the bug?
```typescript
type UserContext = {
Variables: {
…
-
I am following the tutorial for inference with some pretrained models, but I am struggling to find the test dataset. Where is this file, and what type of information is needed for inference?
In the othe…
-
Hi there,
I can't seem to find any examples that show how to create template features. We are trying to dock two proteins, one of which has an experimentally determined structure. The default inferen…
-
Nodes converted.
onnx_layernorm_fuse_pass done!
onnx_gelu_fuse_pass done!
replace_div_to_scale_pass done!
Exporting inference model from python code ('/home/shihuiyu/yolov7-main/yolov7-tiny_infer/…
-
### Describe the Bug
When I run a model converted from torch using the C++ version of the Paddle Inference library, inference on CPU works, but inference on both GPU and XPU fails with errors. The same C++ Paddle Inference library can run GPU inference with a native PaddleSeg Paddle model. Debugging shows the code hangs at the predictor->Run(); step.…
-
**Description**
I want to build a Docker image of Triton in CPU-only mode.
I followed [this](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/customization_guide/build.h…
-
### 🥰 Feature Description
Please consider adding the ability to display the inference speed for each interaction with the AI model.
### 🧐 Proposed Solution
This could be presented in a f…
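One way such a per-interaction metric could be computed is to time the generation call and divide by the token count. A minimal sketch, where `generate_fn` and list-of-tokens output are hypothetical stand-ins for the project's actual model client:

```python
import time

def timed_generate(generate_fn, prompt):
    """Run a generation call and return (output, tokens_per_second).

    generate_fn is a hypothetical callable returning a list of tokens;
    a real integration would instead hook the streaming model client.
    """
    start = time.perf_counter()
    tokens = generate_fn(prompt)
    elapsed = time.perf_counter() - start
    tps = len(tokens) / elapsed if elapsed > 0 else float("inf")
    return tokens, tps
```

The resulting tokens-per-second figure could then be rendered next to each AI response in the UI.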