SciSharp / LLamaSharp

A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.
https://scisharp.github.io/LLamaSharp
MIT License

LLamaSharp v0.15.0 broke CUDA backend #909

Open · SymoHTL opened this issue 2 months ago

SymoHTL commented 2 months ago

Description

I have a Linux server with a Quadro RTX 4000 and the NVIDIA drivers installed. My app runs in a Docker container; as the base image I used FROM nvidia/cuda:12.5.0-runtime-ubuntu22.04 AS base. On v0.13.0 this worked with GGUF models and GPU support, but now I want to run Llama 3.1, so I need to upgrade to v0.15.0. After upgrading, it can't load the native library anymore. If I install only the CPU backend it works, but my server has a GPU for a reason.

Edit: full error

martindevans commented 2 months ago

Can you try testing with the current master branch? We've just merged new binaries, which will become the 0.16.0 release soon.

SymoHTL commented 2 months ago

How can I do that?

martindevans commented 2 months ago

Just clone this repo and build an application to run in your server environment (e.g. one of the examples).
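Roughly like this (a sketch; it assumes the examples project still lives at LLama.Examples in the repo checkout):

git clone https://github.com/SciSharp/LLamaSharp
cd LLamaSharp
dotnet run --project LLama.Examples --configuration Release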

SymoHTL commented 2 months ago

So I just run an example? Is it preconfigured with CUDA?

martindevans commented 2 months ago

By default the examples have WithCuda() called in the initial setup (see here).
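For anyone following along, the relevant setup looks roughly like this (a sketch only; the exact NativeLibraryConfig entry point has moved between LLamaSharp versions, and the model path and layer count below are placeholders):

using LLama;
using LLama.Common;
using LLama.Native;

// Must run before any other LLamaSharp call, otherwise the native
// library has already been loaded and this configuration is ignored.
NativeLibraryConfig.Instance.WithCuda();

// GpuLayerCount controls how many layers are offloaded to the GPU.
var parameters = new ModelParams("path/to/model.gguf")
{
    GpuLayerCount = 32
};
using var weights = LLamaWeights.LoadFromFile(parameters);

Note that a partially-offloaded model only uses the GPU for some of its layers, so if utilisation looks low, raising GpuLayerCount offloads more of the model.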

SymoHTL commented 2 months ago

Hmm, I tried the coding assistant example, but it didn't run on the GPU (screenshot attached).

Edit: wait, I don't have CUDA installed on the host, only in the Docker container.

SymoHTL commented 2 months ago

OK, now the GPU is working, but only at about 25% utilisation. How can I now test the master branch in my app?

aropb commented 2 months ago

@martindevans Please fix this bug in release 0.16.0: https://github.com/SciSharp/LLamaSharp/issues/891

Otherwise, I'll stay on 0.13.0 and KM 0.62.240605.1 :)

SymoHTL commented 2 months ago

Why are you tagging him here about a different issue?

martindevans commented 2 months ago

Please fix this bug in release

It's an open source project, issues will get fixed when someone who wants them fixed puts in the work!

How can I now test the master branch in my app?

The easiest way is probably to remove the NuGet reference from your main project and add a project reference to your cloned copy of LLamaSharp.
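Something like this, run from your main project's directory (the relative path is illustrative and assumes the clone sits next to your solution, with the main project file at LLama/LLamaSharp.csproj):

dotnet remove package LLamaSharp
dotnet add reference ../LLamaSharp/LLama/LLamaSharp.csproj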

aropb commented 2 months ago

I'm sorry if I broke the rules.

SymoHTL commented 2 months ago

When will v0.16.0 be released?

martindevans commented 2 months ago

Hopefully this weekend. I'm going to be busy for the rest of September so I want to get it released before then if possible.

SymoHTL commented 2 months ago

Hmm, it's running on 0.16.0 now, but it's not working in Docker; it works fine outside Docker, though. Are the libraries maybe not being copied correctly?

Edit: my Docker image is the NVIDIA one, set up with CUDA, and compose also passes the GPU through (see the compose sketch below).
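For reference, giving a compose service GPU access usually looks like the device reservation below (a sketch only; the service name webui is a placeholder, not taken from the poster's actual compose file):

services:
  webui:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]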


FROM nvidia/cuda:12.5.0-runtime-ubuntu22.04 AS base

# Install .NET dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    wget \
    apt-transport-https && \
    wget https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb && \
    dpkg -i packages-microsoft-prod.deb && \
    apt-get update && \
    apt-get install -y --no-install-recommends \
    aspnetcore-runtime-8.0 \
    libxml2 && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["WebUi/WebUi.csproj", "WebUi/"]
COPY ["Infrastructure/Infrastructure.csproj", "Infrastructure/"]
COPY ["Application/Application.csproj", "Application/"]
COPY ["Domain/Domain.csproj", "Domain/"]
RUN dotnet restore "WebUi/WebUi.csproj"
COPY . .
WORKDIR "/src/WebUi"
RUN dotnet build "WebUi.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "WebUi.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .

ENTRYPOINT ["dotnet", "WebUi.dll"]

[screenshot attached]

martindevans commented 2 months ago

I don't personally know much about Docker, but I know some people have reported issues before with the binaries not loading in certain Docker environments. In those cases I think it was due to missing dependencies.

Try cloning llama.cpp inside the container and compiling it there, then using those binaries (make sure you use exactly the right llama.cpp version; see the bottom of the README).
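A sketch of that inside the container (it assumes a recent llama.cpp tree; the CUDA build flag was renamed at some point, and older trees used -DLLAMA_CUBLAS=ON instead):

# Build llama.cpp with CUDA support inside the container
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Check out the exact llama.cpp commit listed at the bottom of the
# LLamaSharp README for your LLamaSharp version before building.
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release

The resulting libllama.so can then be copied over the copy LLamaSharp would otherwise load, or pointed at explicitly through NativeLibraryConfig's WithLibrary method.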