ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Feature Request: OpenVINO backend support request #9601

Closed. aropb closed this issue 6 days ago.

aropb commented 1 month ago

Feature Description

OpenVINO backend support request: https://docs.openvino.ai/2024/index.html

Motivation

Rich possibilities for using the CPU, GPU, VPU, and NPU.

Possible Implementation

No response

BarfingLemurs commented 1 month ago

> Rich possibilities for using the CPU, GPU, VPU, and NPU.

SYCL (Intel) is supported in llama.cpp.
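
For reference, the devices the existing SYCL backend can target are whatever the SYCL runtime enumerates. A minimal DPC++ sketch (not llama.cpp code, assuming an Intel oneAPI toolchain; compile with `icpx -fsycl`):

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // sycl::device::get_devices() returns every device the installed SYCL
    // backends expose; with Intel oneAPI this is typically CPUs and GPUs.
    for (const auto &dev : sycl::device::get_devices()) {
        std::cout << dev.get_info<sycl::info::device::name>()
                  << " [" << dev.get_info<sycl::info::device::vendor>() << "]\n";
    }
    return 0;
}
```

As of current oneAPI releases, the Intel NPU is generally not exposed as a SYCL device, which is the gap the comment below points at.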

kylo5aby commented 1 month ago

> > Rich possibilities for using the CPU, GPU, VPU, and NPU.
>
> SYCL (Intel) is supported in llama.cpp.

I think one benefit of supporting an OpenVINO backend is that it would enable the Intel NPU.
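
For illustration, device selection in the OpenVINO 2.0 C++ API is a single string passed to `compile_model()`, which is what would make NPU support cheap once a backend existed. A minimal sketch, assuming an installed OpenVINO runtime; `"model.xml"` is a placeholder path for a model converted to OpenVINO IR:

```cpp
#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;

    // Enumerate the device plugins available on this machine
    // (e.g. "CPU", "GPU", "NPU" on recent Intel hardware).
    for (const auto &name : core.get_available_devices()) {
        std::cout << "available device: " << name << "\n";
    }

    // Compiling for a different device is only a change of device string.
    auto model = core.read_model("model.xml");
    ov::CompiledModel compiled = core.compile_model(model, "NPU");
    ov::InferRequest request = compiled.create_infer_request();
    (void)request; // setting inputs and running inference omitted in this sketch
    return 0;
}
```

Swapping `"NPU"` for `"CPU"` or `"GPU"` retargets the same model, so one OpenVINO backend would cover all the device types listed in the motivation.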

github-actions[bot] commented 6 days ago

This issue was closed because it has been inactive for 14 days since being marked as stale.