Open geekboood opened 8 months ago
Could you check your model with the OpenVINO benchmark app (https://docs.openvino.ai/2023.3/openvino_sample_benchmark_tool.html)?
Run it with the -d GPU option to execute on the GPU.
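As a rough sketch of the suggested check (the model path here is a placeholder; substitute your own IR file), the benchmark invocation could look like:

```shell
# benchmark_app ships with the OpenVINO developer tools.
# -m points at the model's IR .xml file (path below is hypothetical),
# -d GPU selects the Intel GPU plugin,
# -t 30 runs the benchmark for roughly 30 seconds.
benchmark_app -m /path/to/model.xml -d GPU -t 30
```

If this standalone run also hangs under load, the problem is likely in the GPU plugin or driver rather than in OVMS itself.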
Please also share the command you use to start OVMS.
@geekboood please provide information about your Linux kernel and GPU driver versions.
My environment is pretty complicated... My host server runs Debian with the i915 kernel driver. I pass the GPU through to an LXC container running Ubuntu 22.04 with the Intel GPU dependencies installed. I run multiple models on a single GPU (I tweaked the compute runtime parameter to use multi-CCS mode, which should help), and each model is part of an inference pipeline. When the pipeline is under high load, the model server sometimes hangs.
Describe the bug
Inference hangs when using an A770.
Logs
server logs
kernel logs
Configuration
OpenVINO Model Server 2023.3.4e91aac76
OpenVINO backend 2023.3.0.13775.ceeafaf64f3
Bazel build flags: --strip=always --define MEDIAPIPE_DISABLE=0 --cxxopt=-DMEDIAPIPE_DISABLE=0 --define PYTHON_DISABLE=1 --cxxopt=-DPYTHON_DISABLE=1