-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### YOLOv8 Component
Pose Predict
#…
-
Threw up a pretty [easy PR](https://github.com/pixray/pixray/pull/52) which implements the feature described in the post title:
I know it's not perfect as I think maybe other non-pixray libs may still …
-
Hi everyone,
This is a **Common Issue Summary** where I will compile the frequently encountered issues. If you notice any omissions, please feel free to help add to the list. Thank you!
-
### Problem Description
I've run into an interesting problem with OpenCL host-side command queues when profiling is enabled _and_ more than one GPU is used.
Some background:
* My application …
-
As mentioned at the end of https://github.com/triton-inference-server/server/issues/6981
triton: nvcr.io/nvidia/tritonserver 23.12-py3
I have 4 GPUs, and my model is an ensemble model; I don't set gp…
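For context, the usual way to pin each composing model of a Triton ensemble to specific GPUs is an `instance_group` entry in that model's `config.pbtxt`; the counts and GPU ids below are illustrative, not taken from the setup described above:

```
# Hypothetical config.pbtxt fragment for one composing model of the ensemble.
# With no instance_group set, Triton defaults to one instance per visible GPU.
instance_group [
  {
    count: 1
    kind: KIND_GPU
    gpus: [ 0, 1, 2, 3 ]   # pin this model's instances to specific devices
  }
]
```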
-
I did look up how the H.264 code (freerdp/src/FreeRDP-2.5.0/libfreerdp/codec/h264_ffmpeg.c) currently uses VAAPI, and it will always select DRI device 128:
```
#ifdef WITH_VAAPI
#define VAAPI_DEV…
```
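One workaround pattern, sketched here in Python for brevity (a real fix would live in the C codec), is to let an environment variable override the hardcoded render node. `FREERDP_VAAPI_DEVICE` is a hypothetical variable name for illustration, not an existing FreeRDP option:

```python
import os

def vaapi_device() -> str:
    # Fall back to the first render node, which is what the current
    # code hardcodes, unless the user points us elsewhere.
    return os.environ.get("FREERDP_VAAPI_DEVICE", "/dev/dri/renderD128")

print(vaapi_device())
```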
-
Hi there! I'm trying to serve multiple TensorRT-LLM models and I'm wondering what the recommended approach is. I'm using Python to serve TensorRT-LLM models. I've tried / considered:
- `GenerationS…
-
Running on 1x H100 with the latest Docker container from Docker Hub
```
>>> fast_pipe = optimum_pipeline('text-generation', 'meta-llama/Meta-Llama-3-8B-Instruct', use_fp8=True)
Special tokens have bee…
```
-
This line in my custom recipe does not work (the only one that I have added):
`from torchtune.datasets import text_completion_dataset`
When I run `tune`, the message is:
`ImportError: cannot import n…`
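An ImportError like this usually means the installed torchtune version predates the builder being imported. A quick way to check what an installed package actually exports is a small generic helper (this is not a torchtune API, just a diagnostic sketch):

```python
import importlib
import importlib.util

def has_builder(module_name: str, attr: str) -> bool:
    """Return True iff `module_name` is importable and exports `attr`."""
    try:
        spec = importlib.util.find_spec(module_name)
    except ModuleNotFoundError:
        # Parent package (e.g. torchtune) is not installed at all.
        return False
    if spec is None:
        return False
    return hasattr(importlib.import_module(module_name), attr)

# Demonstrated with a stdlib module so the check itself is verifiable:
print(has_builder("json", "loads"))  # True
print(has_builder("torchtune.datasets", "text_completion_dataset"))
```

If the second call prints `False`, upgrading torchtune (or matching the recipe to the installed version's docs) is the next step.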
-
When running on HPG, the print output that gives us "Training batch is on device _" only ever reads "device 0". Is this missing computations on the other GPU (i.e. "device 1"), which the program state…
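One common explanation: if the job is launched with `torchrun` or a SLURM wrapper, each GPU gets its own process, and a print that runs (or a log that is captured) only on rank 0 will always show device 0 even while the other ranks are computing. A minimal sketch of the per-process view, with no torch dependency (the message format mirrors the one quoted above; the `LOCAL_RANK` variable is the one such launchers set):

```python
import os

def describe_device() -> str:
    # Each worker process launched by torchrun/SLURM sees its own LOCAL_RANK;
    # a single process only ever reports its own device index.
    rank = int(os.environ.get("LOCAL_RANK", "0"))
    return f"Training batch is on device {rank}"

# Simulate two workers by setting LOCAL_RANK as the launcher would:
os.environ["LOCAL_RANK"] = "0"
print(describe_device())  # Training batch is on device 0
os.environ["LOCAL_RANK"] = "1"
print(describe_device())  # Training batch is on device 1
```

So seeing only "device 0" in one log stream does not by itself mean device 1 is idle; checking `nvidia-smi` during training is a more direct test.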