microsoft / kernel-memory

RAG architecture: index and query any data using LLM and natural language, track sources, show citations, asynchronous memory patterns.
https://microsoft.github.io/kernel-memory
MIT License
1.35k stars 252 forks

[Bug] Can't publish #498

Closed JohnGalt1717 closed 1 month ago

JohnGalt1717 commented 1 month ago

Context / Scenario

If you try to publish your dotnet project, you get an error about LLamaSharp, which is a known issue with LLamaSharp.

I'm not using LLamaSharp, but it's pulled in anyway, even by the .Core library.

dotnet publish -c Debug -r linux-x64

What happened?

The project should publish cleanly; instead, publish fails with the duplicate-file error below.

Importance

I cannot use Kernel Memory

Platform, Language, Versions

All platforms; any dotnet publish.

Relevant log output

Found multiple publish output files with the same relative path:
C:\Users\k\.nuget\packages\llamasharp.backend.cpu\0.11.2\runtimes\linux-x64\native\avx\libllama.so,
C:\Users\k\.nuget\packages\llamasharp.backend.cpu\0.11.2\runtimes\linux-x64\native\avx2\libllama.so,
C:\Users\k\.nuget\packages\llamasharp.backend.cpu\0.11.2\runtimes\linux-x64\native\avx512\libllama.so,
C:\Users\k\.nuget\packages\llamasharp.backend.cpu\0.11.2\runtimes\linux-x64\native\libllama.so,
C:\Users\k\.nuget\packages\llamasharp.backend.cpu\0.11.2\runtimes\linux-x64\native\avx\libllava_shared.so,
C:\Users\k\.nuget\packages\llamasharp.backend.cpu\0.11.2\runtimes\linux-x64\native\avx2\libllava_shared.so,
C:\Users\k\.nuget\packages\llamasharp.backend.cpu\0.11.2\runtimes\linux-x64\native\avx512\libllava_shared.so,
C:\Users\k\.nuget\packages\llamasharp.backend.cpu\0.11.2\runtimes\linux-x64\native\libllava_shared.so.
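This "Found multiple publish output files with the same relative path" message is the .NET SDK's NETSDK1152 duplicate-output check, introduced in .NET 6. As a stopgap (it suppresses the check rather than fixing the underlying packaging), the check can be disabled in the project file; the property name is standard MSBuild, everything else here is illustrative:

```xml
<!-- Workaround sketch: relax the NETSDK1152 duplicate-publish-output check.
     Publish will then keep only one of the conflicting libllama.so variants,
     so this only helps if the LLamaSharp natives are not actually used. -->
<PropertyGroup>
  <ErrorOnDuplicatePublishOutputFiles>false</ErrorOnDuplicatePublishOutputFiles>
</PropertyGroup>
```

The same property can be passed on the command line, e.g. `dotnet publish -c Debug -r linux-x64 -p:ErrorOnDuplicatePublishOutputFiles=false`.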
JohnGalt1717 commented 1 month ago

This bug is made even worse by the fact that there is no base package plus opt-in packages for just what you need. The second you reference .Core (which isn't really a core package at all), you get LLamaSharp, and everything else too, whether you wanted it or not.

I just need a root package with KernelMemoryBuilder and Qdrant; I do the rest myself, interfacing against the llama.cpp server with my own implementation.

If someone is using AzureAI, why would they need any of the other noise? .Core should be .Core, and if you want to have a .All as well, great.
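Under the package split the commenter is asking for, a minimal project would reference only the core builder plus the one connector it needs. The sketch below is hypothetical: the package names and versions are assumptions about how such a split could look on NuGet, not a statement of what Kernel Memory ships:

```xml
<!-- Hypothetical minimal csproj: core abstractions plus a Qdrant connector,
     with no LLamaSharp (or other backend) dependencies pulled in transitively. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Package names below are illustrative assumptions -->
    <PackageReference Include="Microsoft.KernelMemory.Core" Version="0.60.*" />
    <PackageReference Include="Microsoft.KernelMemory.MemoryDb.Qdrant" Version="0.60.*" />
  </ItemGroup>
</Project>
```

The design point is that each connector lives in its own package, so `dotnet publish` never has to reconcile native assets from backends the application does not use.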

dluc commented 1 month ago

Fixed in 0.60.240517.1