SciSharp / LLamaSharp

A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.
https://scisharp.github.io/LLamaSharp
MIT License

About NVIDIA GPU use example #611

Open · CrazyJson opened 6 months ago

CrazyJson commented 6 months ago

I have an RTX 4060 graphics card. How do I deploy a GPU version of a model with this project?

CrazyJson commented 6 months ago

[screenshots attached]

martindevans commented 6 months ago

You need a GGUF model file to use llama.cpp, not safetensors.

CrazyJson commented 6 months ago

Thanks, I understand that llama.cpp is used to load the quantized GGUF model. One more question: which parameter in the sample code enables the local GPU, and how do I choose which local GPU to use?

ChengYen-Tang commented 6 months ago

@CrazyJson You need to install CUDA on your PC. If you installed CUDA 11, choose the LLamaSharp.Backend.Cuda11 package; if you installed CUDA 12, use the LLamaSharp.Backend.Cuda12 package.

I'm not sure if OpenCL supports Intel graphics cards.
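For reference on the parameter question above: in LLamaSharp, GPU offloading is controlled through `ModelParams`. Below is a minimal sketch, assuming a recent LLamaSharp release with one of the CUDA backend packages installed; the model path and layer count are placeholder values.

```csharp
using LLama;
using LLama.Common;

// Placeholder path to a local quantized GGUF model file.
var parameters = new ModelParams(@"C:\models\llama-2-7b.Q4_K_M.gguf")
{
    // Number of transformer layers to offload to the GPU.
    // Set it at or above the model's layer count to offload everything;
    // reduce it if the model does not fit in VRAM. 0 keeps it all on the CPU.
    GpuLayerCount = 33,

    // Index of the GPU to use when more than one is installed (0 = first device).
    MainGpu = 0
};

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
```

With `GpuLayerCount` at 0, inference runs entirely on the CPU and the model occupies system RAM rather than VRAM, which is the symptom reported below.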

ZCOREP commented 1 month ago

I have the same problem. I downloaded and installed the CUDA 12 package, but it still doesn't use my GPU; it only uses RAM!

martindevans commented 1 month ago

Do you have the CUDA Toolkit installed? You need that to supply the CUDA runtime libraries.

ZCOREP commented 1 month ago

Yes, I did.
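One way to check whether the CUDA backend is actually being picked up is to enable LLamaSharp's native-library logging before anything else runs. A sketch, assuming a LLamaSharp version that exposes `NativeLibraryConfig.Instance` (the exact method names have changed between releases):

```csharp
using LLama.Native;

// This must run before any other LLamaSharp call; once the native
// library has been loaded, these settings have no effect.
NativeLibraryConfig.Instance
    .WithCuda()   // prefer a CUDA backend when one is available
    .WithLogs();  // log which native library gets selected and why

// Load the model afterwards as usual. If the log shows a CPU library
// being chosen despite the Cuda12 backend package being installed, the
// CUDA runtime on the machine is likely missing or mismatched.
```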