microsoft/kernel-memory

RAG architecture: index and query any data using LLM and natural language, track sources, show citations, asynchronous memory patterns.
https://microsoft.github.io/kernel-memory

SharpLLama support - AskAsync never returns answer #195

Closed: vshapenko closed this issue 7 months ago

vshapenko commented 9 months ago

@dluc, as you are a developer of Kernel Memory, could you provide a sample of MemoryServerless based on LLamaSharp? I am trying to make it work (using the text generator code from https://github.com/microsoft/kernel-memory/pull/192), but without much luck: AskAsync never completes.

Here is my code:

open System
open LLama
open LLama.Common
open LLamaSharp.KernelMemory
open Microsoft.FSharp.Core
open Microsoft.KernelMemory
open Microsoft.KernelMemory.AI
open Microsoft.KernelMemory.Handlers
open Microsoft.KernelMemory.MemoryStorage.Qdrant

let memoryBuilder = KernelMemoryBuilder()

let inferenceParams = new InferenceParams(AntiPrompts = [| "<|end_of_turn|>" |])

let llamaConfig = new LLamaSharpConfig("/Users/codechanger/llama/openchat_3.5.Q5_K_M.gguf")
llamaConfig.DefaultInferenceParams <- inferenceParams
llamaConfig.ContextSize <- 4096u

type Generator(config: LLamaSharpConfig) =
    let modelParams = new ModelParams(config.ModelPath)
    do modelParams.ContextSize <- config.ContextSize
    let weights = LLamaWeights.LoadFromFile(modelParams)
    let embedder = new LLamaEmbedder(weights, modelParams)
    let context = weights.CreateContext(modelParams)

    interface ITextEmbeddingGenerator with
        member this.CountTokens(text) = context.Tokenize(text).Length

        member this.GenerateEmbeddingAsync(text, cancellationToken) =
            task {
                let embeddings = embedder.GetEmbeddings(text)
                return Embedding(embeddings)
            }

        member this.MaxTokens = int (modelParams.ContextSize.GetValueOrDefault())

        // member this.(data, kernel, cancellationToken) =
        //     }
        // member this.Attributes = Dictionary<string,obj>()

type TextGenerator(config: LLamaSharpConfig) =
    let modelParams = new ModelParams(config.ModelPath)
    do modelParams.ContextSize <- config.ContextSize
    let weights = LLamaWeights.LoadFromFile(modelParams)

    interface ITextGenerator with
        member this.CountTokens(text) =
            use context = weights.CreateContext(modelParams)
            context.Tokenize(text).Length

        member this.GenerateTextAsync(prompt, options, cancellationToken) =
            let parameters = InferenceParams()
            parameters.Temperature <- float32 options.Temperature
            parameters.AntiPrompts <- options.StopSequences |> Seq.toArray
            parameters.TopP <- float32 options.TopP
            parameters.PresencePenalty <- float32 options.PresencePenalty
            parameters.FrequencyPenalty <- float32 options.FrequencyPenalty
            parameters.MaxTokens <- options.MaxTokens.GetValueOrDefault()
            let executor = new StatelessExecutor(weights, modelParams)
            executor.InferAsync(prompt)

        member this.MaxTokenTotal = int (config.ContextSize.GetValueOrDefault())

let mb =
    memoryBuilder
        .WithCustomEmbeddingGenerator(Generator(llamaConfig))
        .WithCustomTextGenerator(TextGenerator(llamaConfig))
        .With(new TextPartitioningOptions(MaxTokensPerParagraph = 300, MaxTokensPerLine = 100, OverlappingTokens = 30))

let kernelMemory = mb.Build()

task {
    let! doc = kernelMemory.ImportDocumentAsync("/Users/codechanger/test.txt")
    Console.WriteLine(doc)
    let! res = kernelMemory.AskAsync("autumn")
    Console.WriteLine(res.Result)
}
|> ignore

Console.ReadLine() |> ignore

test.txt itself is very simple; it contains only: "Autumn is sad season"

I am using TheBloke's model for OpenChat 3.5.

dluc commented 9 months ago

@vshapenko could you retry with the latest code in the PR? The initial code was not setting max_tokens, so LLama would keep generating tokens without stopping. The max token count is now set from SearchClientConfig.AnswerTokens, so text generation will stop.
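In practice the cap is applied when building the memory instance, e.g. (a minimal sketch; the token value is just an illustration):

// Limit how many tokens the model may generate for an answer,
// so AskAsync completes instead of generating indefinitely.
var searchClientConfig = new SearchClientConfig
{
    AnswerTokens = 100 // illustrative value
};

var memory = new KernelMemoryBuilder()
    .WithSearchClientConfig(searchClientConfig)
    .WithLlamaTextGeneration(llamaConfig)
    .Build<MemoryServerless>();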

About the example, see LlamaSharpTextGeneratorTest.ItGeneratesText or example 003, e.g.

var memory = new KernelMemoryBuilder()
    .WithLlamaTextGeneration(llamaConfig)
    .WithAzureOpenAITextEmbeddingGeneration(azureOpenAIEmbeddingConfig, new DefaultGPTTokenizer())
    .Build<MemoryServerless>();
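Once built, the instance is used the same way as in your snippet, e.g. (path and question are placeholders):

// Import a document, then ask a question grounded on it.
await memory.ImportDocumentAsync("test.txt");
var answer = await memory.AskAsync("what is the document about?");
Console.WriteLine(answer.Result);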

Note that depending on your device, you might be getting only a few tokens per second. On a 2019 MacBook Pro (Intel), for instance, LlamaSharpTextGeneratorTest.ItGeneratesText with openchat_3.5.Q5_K_M.gguf takes ~30 secs to complete.

vshapenko commented 9 months ago

@dluc, I did some additional investigation and it looks like the problem is inside SearchClient. My text generator produces text, but for some reason it is not returned through AskAsync. I will examine the SearchClient and come back to you with my results.
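For reference, this is roughly how I exercise the generator in isolation (sketched in C# for brevity; TextGenerationOptions is assumed to be the options type behind the ITextGenerator signature used above):

// Drive the custom text generator directly, bypassing SearchClient,
// to confirm that tokens are actually produced.
ITextGenerator generator = new TextGenerator(llamaConfig);
await foreach (var token in generator.GenerateTextAsync(
                   "Question: what is autumn? Answer:",
                   new TextGenerationOptions(),   // assumed options type
                   CancellationToken.None))
{
    Console.Write(token); // tokens stream out, so generation itself works
}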

dluc commented 9 months ago

here's an example: https://github.com/microsoft/kernel-memory/blob/main/examples/105-dotnet-serverless-llamasharp/Program.cs

Please note: depending on hardware, it can take 3+ minutes to complete (or just a few seconds).

vvdb-architecture commented 8 months ago

I can confirm that with the latest version, and with models openchat_3.5.Q5_K_M.gguf, ggml-model-q4_0.gguf and kai-7b-instruct.Q5_K_M.gguf, AskAsync never returns.

I also noticed that even though the cuda12 version of llama.cpp is loaded, the graphics card is essentially idle.
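One possible explanation (just an assumption on my side) is that no layers are being offloaded to the GPU; in LLamaSharp that is controlled by ModelParams.GpuLayerCount, e.g.:

// Sketch only: offload layers to the GPU when constructing ModelParams.
// GpuLayerCount = 0 keeps inference entirely on the CPU.
var modelParams = new ModelParams(@"D:\Source\km\Data\ggml-model-q4_0.gguf")
{
    GpuLayerCount = 20 // number of layers to offload to the GPU
};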

Example to reproduce:

using ConsoleApp1;
using LLama.Native;
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.ContentStorage.DevTools;
using Microsoft.KernelMemory.FileSystem.DevTools;
using System.Diagnostics;

var llamaSharpConfig = new LlamaSharpConfig
{
    ModelPath = @"D:\Source\km\Data\ggml-model-q4_0.gguf",
};

var searchClientConfig = new SearchClientConfig
{
    MaxMatchesCount = 2,
    AnswerTokens = 100,
};

NativeLibraryConfig.Instance.WithLogs();

// Memory setup, e.g. how to calculate and where to store embeddings
var kernelMemoryBuilder = new KernelMemoryBuilder()
    .WithSearchClientConfig(searchClientConfig)
    .WithLlamaTextGeneration(llamaSharpConfig)
    .WithCustomEmbeddingGenerator(new TextEmbeddingGenerator(llamaSharpConfig))
    .WithCustomTextGenerator(new TextGenerator(llamaSharpConfig))
    .WithSimpleFileStorage(new SimpleFileStorageConfig { StorageType = FileSystemTypes.Disk, Directory = @"D:\Source\km\Weights" })
    ;

var memory = kernelMemoryBuilder.Build(); // this is the same as Build<MemoryServerless>()

// Some sci-fi content based on recent news from the ISS
var story = @"
            A strange and surprising event transpired upon the celestial manmade globe - the International Space Station. A vegetable of the red fruit variety, otherwise known on our terrestrial land as a 'tomato', was cultivated with the remarkable method of hydroponics, defying the hitherto believed necessity of soil for growth, and subsequently misplaced by the American Voyager, Mr. Frank Rubio.
            As trivial as it may seem, the plantation of this tomato held great significance, being the inaugural produce of its kind to flourish in the challenging conditions of the cosmos, and its inexplicable disappearance made for a comical investigation of sorts. Mr. Rubio, convinced of its safekeeping, found the prize fruit astray and upon his return to the Earth, the bewitching mystery of the vanishing tomato persisted.
            Much to the disquiet of Rubio, accusations of him having consumed the invaluable specimen disquieted the floating abode. He vehemently refuted the charges, attributing the disappearance to the curious character of the conditions in space, where objects not securely affixed could easily drift into unforeseen corners of the spacious station. Despite his rigorous search, the tomato evaded discovery.
            This incident of mirth, notwithstanding, Mr. Rubio's sojourn in space did not stay deprived of notable triumph. His stay in this amidst the heavenly spheres reached a duration hitherto unknown to any American voyager, marking a full Earth-year in space. Rendered longer owing to an unfortunate leak detected in his Russian Soyuz spacecraft, it proved to be a challenging, yet rewarding journey for Rubio.
            A resolution to the tale of the missing tomato finally came not during Mr. Rubio's stay, but with the revelation of the crew remaining in the station of the discovery of the missing specimen. Thus, even after returning to the terrestrial sphere, the voyager's innocence was ultimately affirmed, adding a closing chapter to this historical space oddity.
            Alas, despite the humour this event bequeathed, the great strides made in the science of celestial agriculture cannot be understated. The successful cultivation of a tomato under such harsh conditions bodes well for future endeavours of similar nature, serving as a promising beacon of mankind's progress against the unique challenges that space exploration poses.
            Id est, Rubio's 'lost in space' tomato sparks a shift from jest to marvel, creating a newfound appreciation for the advancements in scientific know-how, that led to the cultivation, and eventual rediscovery of a humble fruit in space.
            Mindful of the peculiar incident, the space administration contrived to install advanced object-tracking systems within the Space Station to avoid recurrent miscellany loss. A new regimen was also introduced to ensure that harvested produce was promptly accounted for and preserved, preventing any further produce-related mysteries.
            Simultaneously, this whimsical incident spurred a new stream of scientific study centered around the longevity and preservation of biotic material in a microgravity environment. Scientists discovered that the space-cultivated tomato, despite its desiccated state, presented unique characteristics not found in its Earth-grown counterparts.
            Detailed analysis revealed heightened concentrations of lycopene in the space-grown tomato, a potent antioxidant known for its numerous health benefits including reducing the risk of heart diseases and cancer. It was debated whether these enhanced features were a byproduct of the tomato's prolonged exposure to cosmic radiation or the unique hydroponic growth methodology adopted on the space station.
            Additionally, the longevity of the tomato in an un-refrigerated state sparked interest in bio-engineering crops for greater longevity on Earth, with potential implications for reducing food waste. The space life of the tomato, in all its humour and seriousness, may mark the beginning of far-reaching advancements in botanical sciences and space exploration.
            In a surprising twist to the tale, around the time the elusive tomato was found, the crew on the space station also stumbled upon something extraordinary — an unidentified substance found growing alongside the microgravity tomatoes. Initially thought to be a mold or fungus, subsequent analysis revealed an organic composition unlike anything known to Earth-bound biology.
            Appearing as a glowing, translucent mold, this substance showed a remarkable rate of growth and exhibited photosynthetic properties, drawing energy not just from sunlight, but also from other forms of radiation. It was able to adapt quickly to the environmental conditions of the space station, including its high CO2 levels.
            Gerald Marshall, the Chief Scientist on the team at NASA, said during a press briefing, ""Our initial findings lead us to believe the matter is not terrestrial. Its unprecedented radiant energy conversion efficiency and adaptability are akin to, but far exceed, those seen in extremophile organisms on Earth. We are eager to undertake a comprehensive study and certainly, this could potentially mark a new chapter in astrobiological research.""
            While further studies are underway, this intriguing finding sparked a flurry of interest and speculation within and outside the scientific community. This new organic matter, playfully named ‘Rubio's Radiant Mold’ in honor of astronaut Frank Rubio, could potentially reshape our understanding of life in the cosmos and further blur the lines between science fiction and reality. With each passing day, the 'final frontier' appears to become more familiar and intriguingly alien at the same time.
            ";

var sw = Stopwatch.StartNew();
await memory.ImportTextAsync(story, documentId: "tomato01");

sw.Stop();
Console.WriteLine($"Document indexed in {sw.Elapsed}");

var question = "What happened to the tomato disappeared on the International Space Station?";
Console.WriteLine($"Question: {question}");
sw.Restart();

var answer = await memory.AskAsync(question);
Console.WriteLine($"Answer: {answer.Result}");

sw.Stop();
Console.WriteLine($"Question answered in {sw.Elapsed}");

await memory.DeleteDocumentAsync("tomato01");

The output is:

[LLamaSharp Native] [Info] NativeLibraryConfig Description:
- Path:
- PreferCuda: True
- PreferredAvxLevel: AVX2
- AllowFallback: True
- SkipCheck: False
- Logging: True
- SearchDirectories and Priorities: { ./ }
[LLamaSharp Native] [Info] Detected OS Platform: WINDOWS
[LLamaSharp Native] [Info] Detected cuda major version 12.
[LLamaSharp Native] [Info] ./runtimes/win-x64/native/cuda12/libllama.dll is selected and loaded successfully.
llama_model_loader: loaded meta data with 21 key-value pairs and 201 tensors from D:\Source\km\Data\ggml-model-q4_0.gguf (version GGUF V2)
llama_model_loader: - tensor    0:                    output.weight q6_K     [  2048, 32000,     1,     1 ]
llama_model_loader: - tensor    1:                token_embd.weight q4_0     [  2048, 32000,     1,     1 ]
llama_model_loader: - tensor    2:           blk.0.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor    3:            blk.0.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor    4:            blk.0.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor    5:              blk.0.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor    6:            blk.0.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor    7:              blk.0.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor    8:         blk.0.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor    9:              blk.0.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   10:              blk.0.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   11:           blk.1.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   12:            blk.1.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   13:            blk.1.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   14:              blk.1.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   15:            blk.1.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   16:              blk.1.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   17:         blk.1.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   18:              blk.1.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   19:              blk.1.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   20:          blk.10.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   21:           blk.10.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   22:           blk.10.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   23:             blk.10.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   24:           blk.10.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   25:             blk.10.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   26:        blk.10.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   27:             blk.10.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   28:             blk.10.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   29:          blk.11.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   30:           blk.11.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   31:           blk.11.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   32:             blk.11.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   33:           blk.11.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   34:             blk.11.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   35:        blk.11.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   36:             blk.11.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   37:             blk.11.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   38:          blk.12.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   39:           blk.12.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   40:           blk.12.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   41:             blk.12.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   42:           blk.12.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   43:             blk.12.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   44:        blk.12.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   45:             blk.12.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   46:             blk.12.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   47:          blk.13.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   48:           blk.13.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   49:           blk.13.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   50:             blk.13.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   51:           blk.13.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   52:             blk.13.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   53:        blk.13.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   54:             blk.13.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   55:             blk.13.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   56:          blk.14.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   57:           blk.14.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   58:           blk.14.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   59:             blk.14.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   60:           blk.14.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   61:             blk.14.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   62:        blk.14.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   63:             blk.14.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   64:             blk.14.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   65:          blk.15.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   66:           blk.15.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   67:           blk.15.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   68:             blk.15.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   69:           blk.15.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   70:             blk.15.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   71:        blk.15.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   72:             blk.15.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   73:             blk.15.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   74:          blk.16.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   75:           blk.16.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   76:           blk.16.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   77:             blk.16.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   78:           blk.16.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   79:             blk.16.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   80:        blk.16.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   81:             blk.16.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   82:             blk.16.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   83:          blk.17.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   84:           blk.17.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   85:           blk.17.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   86:             blk.17.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   87:           blk.17.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   88:             blk.17.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   89:        blk.17.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   90:             blk.17.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   91:             blk.17.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   92:          blk.18.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   93:           blk.18.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   94:           blk.18.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   95:             blk.18.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   96:           blk.18.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   97:             blk.18.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   98:        blk.18.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   99:             blk.18.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  100:             blk.18.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  101:          blk.19.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  102:           blk.19.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  103:           blk.19.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  104:             blk.19.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  105:           blk.19.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  106:             blk.19.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  107:        blk.19.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  108:             blk.19.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  109:             blk.19.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  110:           blk.2.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  111:            blk.2.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  112:            blk.2.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  113:              blk.2.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  114:            blk.2.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  115:              blk.2.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  116:         blk.2.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  117:              blk.2.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  118:              blk.2.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  119:          blk.20.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  120:           blk.20.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  121:           blk.20.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  122:             blk.20.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  123:           blk.20.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  124:             blk.20.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  125:        blk.20.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  126:             blk.20.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  127:             blk.20.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  128:          blk.21.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  129:           blk.21.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  130:           blk.21.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  131:             blk.21.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  132:           blk.21.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  133:             blk.21.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  134:        blk.21.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  135:             blk.21.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  136:             blk.21.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  137:           blk.3.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  138:            blk.3.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  139:            blk.3.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  140:              blk.3.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  141:            blk.3.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  142:              blk.3.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  143:         blk.3.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  144:              blk.3.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  145:              blk.3.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  146:           blk.4.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  147:            blk.4.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  148:            blk.4.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  149:              blk.4.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  150:            blk.4.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  151:              blk.4.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  152:         blk.4.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  153:              blk.4.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  154:              blk.4.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  155:           blk.5.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  156:            blk.5.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  157:            blk.5.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  158:              blk.5.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  159:            blk.5.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  160:              blk.5.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  161:         blk.5.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  162:              blk.5.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  163:              blk.5.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  164:           blk.6.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  165:            blk.6.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  166:            blk.6.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  167:              blk.6.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  168:            blk.6.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  169:              blk.6.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  170:         blk.6.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  171:              blk.6.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  172:              blk.6.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  173:           blk.7.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  174:            blk.7.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  175:            blk.7.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  176:              blk.7.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  177:            blk.7.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  178:              blk.7.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  179:         blk.7.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  180:              blk.7.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  181:              blk.7.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  182:           blk.8.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  183:            blk.8.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  184:            blk.8.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  185:              blk.8.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  186:            blk.8.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  187:              blk.8.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  188:         blk.8.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  189:              blk.8.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  190:              blk.8.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  191:           blk.9.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  192:            blk.9.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  193:            blk.9.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  194:              blk.9.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  195:            blk.9.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  196:              blk.9.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  197:         blk.9.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  198:              blk.9.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  199:              blk.9.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  200:               output_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   4:                          llama.block_count u32              = 22
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5632
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 64
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 4
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   45 tensors
llama_model_loader: - type q4_0:  155 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 4
llm_load_print_meta: n_layer          = 22
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 5632
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 1.10 B
llm_load_print_meta: model size       = 606.53 MiB (4.63 BPW)
llm_load_print_meta: general.name   = models
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 2 '</s>'
llm_load_print_meta: LF token  = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.07 MiB
llm_load_tensors: mem required  =  606.60 MiB
.......................................................................................
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  =   88.00 MiB
llama_build_graph: non-view tensors processed: 510/510
llama_new_context_with_model: compute buffer total size = 279.07 MiB
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  =   88.00 MiB
llama_build_graph: non-view tensors processed: 510/510
llama_new_context_with_model: compute buffer total size = 279.07 MiB
llama_model_loader: loaded meta data with 21 key-value pairs and 201 tensors from D:\Source\km\Data\ggml-model-q4_0.gguf (version GGUF V2)
llama_model_loader: - tensor    0:                    output.weight q6_K     [  2048, 32000,     1,     1 ]
llama_model_loader: - tensor    1:                token_embd.weight q4_0     [  2048, 32000,     1,     1 ]
llama_model_loader: - tensor    2:           blk.0.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor    3:            blk.0.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor    4:            blk.0.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor    5:              blk.0.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor    6:            blk.0.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor    7:              blk.0.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor    8:         blk.0.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor    9:              blk.0.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   10:              blk.0.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   11:           blk.1.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   12:            blk.1.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   13:            blk.1.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   14:              blk.1.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   15:            blk.1.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   16:              blk.1.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   17:         blk.1.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   18:              blk.1.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   19:              blk.1.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   20:          blk.10.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   21:           blk.10.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   22:           blk.10.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   23:             blk.10.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   24:           blk.10.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   25:             blk.10.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   26:        blk.10.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   27:             blk.10.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   28:             blk.10.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   29:          blk.11.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   30:           blk.11.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   31:           blk.11.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   32:             blk.11.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   33:           blk.11.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   34:             blk.11.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   35:        blk.11.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   36:             blk.11.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   37:             blk.11.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   38:          blk.12.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   39:           blk.12.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   40:           blk.12.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   41:             blk.12.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   42:           blk.12.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   43:             blk.12.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   44:        blk.12.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   45:             blk.12.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   46:             blk.12.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   47:          blk.13.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   48:           blk.13.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   49:           blk.13.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   50:             blk.13.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   51:           blk.13.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   52:             blk.13.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   53:        blk.13.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   54:             blk.13.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   55:             blk.13.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   56:          blk.14.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   57:           blk.14.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   58:           blk.14.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   59:             blk.14.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   60:           blk.14.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   61:             blk.14.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   62:        blk.14.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   63:             blk.14.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   64:             blk.14.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   65:          blk.15.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   66:           blk.15.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   67:           blk.15.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   68:             blk.15.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   69:           blk.15.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   70:             blk.15.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   71:        blk.15.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   72:             blk.15.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   73:             blk.15.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   74:          blk.16.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   75:           blk.16.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   76:           blk.16.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   77:             blk.16.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   78:           blk.16.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   79:             blk.16.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   80:        blk.16.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   81:             blk.16.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   82:             blk.16.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   83:          blk.17.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   84:           blk.17.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   85:           blk.17.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   86:             blk.17.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   87:           blk.17.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   88:             blk.17.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   89:        blk.17.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   90:             blk.17.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   91:             blk.17.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   92:          blk.18.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   93:           blk.18.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   94:           blk.18.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   95:             blk.18.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   96:           blk.18.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   97:             blk.18.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   98:        blk.18.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   99:             blk.18.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  100:             blk.18.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  101:          blk.19.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  102:           blk.19.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  103:           blk.19.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  104:             blk.19.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  105:           blk.19.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  106:             blk.19.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  107:        blk.19.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  108:             blk.19.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  109:             blk.19.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  110:           blk.2.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  111:            blk.2.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  112:            blk.2.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  113:              blk.2.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  114:            blk.2.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  115:              blk.2.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  116:         blk.2.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  117:              blk.2.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  118:              blk.2.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  119:          blk.20.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  120:           blk.20.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  121:           blk.20.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  122:             blk.20.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  123:           blk.20.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  124:             blk.20.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  125:        blk.20.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  126:             blk.20.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  127:             blk.20.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  128:          blk.21.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  129:           blk.21.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  130:           blk.21.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  131:             blk.21.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  132:           blk.21.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  133:             blk.21.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  134:        blk.21.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  135:             blk.21.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  136:             blk.21.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  137:           blk.3.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  138:            blk.3.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  139:            blk.3.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  140:              blk.3.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  141:            blk.3.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  142:              blk.3.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  143:         blk.3.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  144:              blk.3.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  145:              blk.3.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  146:           blk.4.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  147:            blk.4.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  148:            blk.4.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  149:              blk.4.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  150:            blk.4.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  151:              blk.4.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  152:         blk.4.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  153:              blk.4.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  154:              blk.4.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  155:           blk.5.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  156:            blk.5.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  157:            blk.5.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  158:              blk.5.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  159:            blk.5.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  160:              blk.5.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  161:         blk.5.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  162:              blk.5.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  163:              blk.5.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  164:           blk.6.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  165:            blk.6.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  166:            blk.6.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  167:              blk.6.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  168:            blk.6.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  169:              blk.6.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  170:         blk.6.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  171:              blk.6.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  172:              blk.6.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  173:           blk.7.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  174:            blk.7.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  175:            blk.7.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  176:              blk.7.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  177:            blk.7.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  178:              blk.7.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  179:         blk.7.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  180:              blk.7.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  181:              blk.7.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  182:           blk.8.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  183:            blk.8.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  184:            blk.8.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  185:              blk.8.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  186:            blk.8.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  187:              blk.8.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  188:         blk.8.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  189:              blk.8.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  190:              blk.8.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  191:           blk.9.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  192:            blk.9.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  193:            blk.9.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  194:              blk.9.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  195:            blk.9.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  196:              blk.9.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  197:         blk.9.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  198:              blk.9.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  199:              blk.9.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  200:               output_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   4:                          llama.block_count u32              = 22
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5632
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 64
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 4
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   45 tensors
llama_model_loader: - type q4_0:  155 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 4
llm_load_print_meta: n_layer          = 22
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 5632
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 1.10 B
llm_load_print_meta: model size       = 606.53 MiB (4.63 BPW)
llm_load_print_meta: general.name   = models
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 2 '</s>'
llm_load_print_meta: LF token  = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.07 MiB
llm_load_tensors: mem required  =  606.60 MiB
.......................................................................................
info: Microsoft.KernelMemory.Handlers.TextExtractionHandler[0]
      Handler 'extract' ready
info: Microsoft.KernelMemory.Handlers.TextPartitioningHandler[0]
      Handler 'partition' ready
info: Microsoft.KernelMemory.Handlers.SummarizationHandler[0]
      Handler 'summarize' ready
info: Microsoft.KernelMemory.Handlers.GenerateEmbeddingsHandler[0]
      Handler 'gen_embeddings' ready, 1 embedding generators
info: Microsoft.KernelMemory.Handlers.SaveRecordsHandler[0]
      Handler save_records ready, 1 vector storages
info: Microsoft.KernelMemory.Handlers.DeleteDocumentHandler[0]
      Handler 'private_delete_document' ready
info: Microsoft.KernelMemory.Handlers.DeleteIndexHandler[0]
      Handler 'private_delete_index' ready
info: Microsoft.KernelMemory.Handlers.DeleteGeneratedFilesHandler[0]
      Handler 'delete_generated_files' ready
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Queueing upload of 1 files for further processing [request tomato01]
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      File uploaded: content.txt, 5855 bytes
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Handler 'extract' processed pipeline 'default/tomato01' successfully
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Handler 'partition' processed pipeline 'default/tomato01' successfully
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Handler 'gen_embeddings' processed pipeline 'default/tomato01' successfully
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Handler 'save_records' processed pipeline 'default/tomato01' successfully
info: Microsoft.KernelMemory.Pipeline.BaseOrchestrator[0]
      Pipeline 'default/tomato01' complete
Document indexed in 00:00:24.8906667
Question: What happened to the tomato disappeared on the International Space Station?
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  =   88.00 MiB
llama_build_graph: non-view tensors processed: 510/510
llama_new_context_with_model: compute buffer total size = 279.07 MiB
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  =   88.00 MiB
llama_build_graph: non-view tensors processed: 510/510
llama_new_context_with_model: compute buffer total size = 279.07 MiB
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  =   88.00 MiB
llama_build_graph: non-view tensors processed: 510/510
llama_new_context_with_model: compute buffer total size = 279.07 MiB
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  =   88.00 MiB
llama_build_graph: non-view tensors processed: 510/510
llama_new_context_with_model: compute buffer total size = 279.07 MiB
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  =   88.00 MiB
llama_build_graph: non-view tensors processed: 510/510
llama_new_context_with_model: compute buffer total size = 279.07 MiB
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  =   88.00 MiB
llama_build_graph: non-view tensors processed: 510/510
llama_new_context_with_model: compute buffer total size = 279.07 MiB
vvdb-architecture commented 8 months ago

I also notice that the length of the document and the complexity of the query (in my example above) don't matter: even trivial documents and queries make AskAsync hang. And, as mentioned, no CUDA seems to be used even though the correct .dll is loaded. Very strange.
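
One way to tell a genuine hang apart from very slow CPU-only inference is to put a deadline on the call. A minimal F# sketch, assuming `kernelMemory` is the serverless memory instance built earlier and that AskAsync exposes an optional `cancellationToken` parameter (the timeout value is arbitrary):

```fsharp
open System
open System.Threading

task {
    // Cancel the Ask call if it hasn't produced an answer within 5 minutes.
    // If it completes (slowly) before the deadline, the problem is throughput,
    // not a hang; if it's cancelled, generation is likely stuck.
    use cts = new CancellationTokenSource(TimeSpan.FromMinutes 5.0)
    try
        let! answer = kernelMemory.AskAsync("autumn", cancellationToken = cts.Token)
        Console.WriteLine(answer.Result)
    with :? OperationCanceledException ->
        Console.WriteLine("AskAsync did not finish within the timeout")
} |> fun t -> t.Wait()
```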

dluc commented 7 months ago

I put a sample in a branch here: https://github.com/microsoft/kernel-memory/tree/llamatest. The performance really depends on the hardware available. Test with openchat_3.5.Q5_K_M.gguf:

- Apple M3: a few seconds
- PC: 4 minutes
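
For reference, wiring LLamaSharp through the LLamaSharp.KernelMemory package rather than hand-rolled generators looks roughly like this. This is not the branch's exact code, just a minimal F# sketch assuming the `WithLLamaSharpDefaults` and `WithSearchClientConfig` builder extensions; the model path and token limits are placeholders:

```fsharp
open LLamaSharp.KernelMemory
open Microsoft.KernelMemory

// Placeholder model path; point it at any local GGUF file.
let llamaConfig = LLamaSharpConfig("/path/to/openchat_3.5.Q5_K_M.gguf")

let memory =
    KernelMemoryBuilder()
        // Registers LLamaSharp for both text generation and embeddings.
        .WithLLamaSharpDefaults(llamaConfig)
        // Capping AnswerTokens keeps generation from running until the context window is exhausted.
        .WithSearchClientConfig(SearchClientConfig(MaxMatchesCount = 2, AnswerTokens = 100))
        .Build()
```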

vvdb-architecture commented 7 months ago

This may be related to https://github.com/microsoft/kernel-memory/issues/266. In fact, it's only when I remove the CPU back-end from LLamaSharp that the system actually starts to use CUDA.

dluc commented 7 months ago

I think that's an issue that only the LLamaSharp team can address. On this front, perhaps we'll add Ollama and LM Studio support in the future, offering a different way of using LLama and other models. Is that something that would work for you?

dluc commented 1 week ago

Update: KM v0.72 now includes an Ollama connector, making it much easier to work with local models.

Example here: https://github.com/microsoft/kernel-memory/blob/main/examples/212-dotnet-ollama/Program.cs

This should provide a workaround for the issue above.
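
For completeness, a minimal F# sketch of the Ollama route, assuming the `OllamaConfig`, `OllamaModelConfig`, `GPT4oTokenizer`, and `WithOllamaTextGeneration` / `WithOllamaTextEmbeddingGeneration` surface used in the linked C# example; the endpoint, model names, and token limits are placeholders, and Ollama must be running locally with the models pulled:

```fsharp
open System
open Microsoft.KernelMemory
open Microsoft.KernelMemory.AI
open Microsoft.KernelMemory.AI.Ollama

// Placeholder models; any text-generation and embedding models available in Ollama will do.
let ollamaConfig =
    OllamaConfig(
        Endpoint = "http://localhost:11434",
        TextModel = OllamaModelConfig("phi3:medium-128k", 131072),
        EmbeddingModel = OllamaModelConfig("nomic-embed-text", 2048))

let memory =
    KernelMemoryBuilder()
        .WithOllamaTextGeneration(ollamaConfig, GPT4oTokenizer())
        .WithOllamaTextEmbeddingGeneration(ollamaConfig, GPT4oTokenizer())
        .Build()

task {
    // Index a short text, then ask a question grounded in it.
    let! _ = memory.ImportTextAsync("The tomato that disappeared on the ISS was found after about eight months.")
    let! answer = memory.AskAsync("What happened to the missing tomato?")
    Console.WriteLine(answer.Result)
} |> fun t -> t.Wait()
```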