aropb opened 3 months ago
I encountered the same issue when running the sample code: "Kernel Memory: Document Q&A" or "Kernel Memory: Save and Load" from the LLama.Examples project.
@aropb @jwangga I am facing the same issue running the example 'Kernel Memory: Document Q&A'. Did you find a fix for this? I am trying to implement a RAG system using this. Is there any other way to implement it apart from Kernel Memory?
@tusharmevl I have not found a fix for the Kernel Memory issue. It seems that the integration with Semantic Kernel Memory works. You may try using that as an alternative if your system only needs to support Text.
@jwangga Ok Thanks! Yes I need to support text only for now, will try that.
Thanks @jwangga !!
I'm seeing that you can use Semantic Kernel Memory (SKM) as well.
Doesn't appear that you can "chat" with SKM to discuss results unfortunately. Have you been able to figure out a way to "ask" questions of SKM?
I'm also having the same issue with the Kernel Memory: Document Q&A example.
Please, I really need to fix the error.
So far, I can only use these versions:
- Microsoft.KernelMemory.Core = 0.62.240605.1
- LLamaSharp = 0.13.0

Any newer versions do not work.
I think the mistake is here, in `ISamplingPipelineExtensions.Sample()`:

```csharp
...
var span = CollectionsMarshal.AsSpan(lastTokens);
return pipeline.Sample(ctx, logits, span);   // <-- error occurs here
...
```
I found the place where the error occurs: `llama_get_logits_ith` suddenly returns null ("returns NULL for invalid ids").
```csharp
// SafeLLamaContextHandle
public Span<float> GetLogitsIth(int i)
{
    var model = ThrowIfDisposed();
    unsafe
    {
        var logits = llama_get_logits_ith(this, i);
        return new Span<float>(logits, model.VocabCount);
    }
}
```
Stack:

```csharp
// StatelessExecutor.InferAsync()
...
var id = pipeline.Sample(Context.NativeHandle, Context.NativeHandle.GetLogitsIth(_batch.TokenCount - 1), lastTokens);
...
```
But I don't understand what to do next or how to fix the error. Apparently, null shouldn't be there. Can anyone help with this? Because of this error, it is impossible to use Kernel Memory.
Thanks.
That's probably indicative of two bugs in LLamaSharp.
The docs for `llama_get_logits_ith` (see here) say:

```c
// Logits for the ith token. For positive indices, Equivalent to:
// llama_get_logits(ctx) + ctx->output_ids[i]*n_vocab
// Negative indicies can be used to access logits in reverse order, -1 is the last logit.
// returns NULL for invalid ids.
LLAMA_API float * llama_get_logits_ith(struct llama_context * ctx, int32_t i);
```
So it is valid for `llama_get_logits_ith` to return null! That means `SafeLLamaContextHandle.GetLogitsIth` is incorrectly written: it should check for null and raise some kind of error in that case (throw an exception, most likely). It is never valid to pass a null pointer into a span constructor! This is why you get a hard crash instead of an exception.
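For reference, a minimal sketch of what that check could look like (the exception type and message here are my own illustration, not an actual patch):

```csharp
public Span<float> GetLogitsIth(int i)
{
    var model = ThrowIfDisposed();
    unsafe
    {
        var logits = llama_get_logits_ith(this, i);

        // llama.cpp documents that this returns NULL for invalid ids,
        // so fail loudly instead of wrapping a null pointer in a Span.
        if (logits == null)
            throw new InvalidOperationException($"llama_get_logits_ith({i}) returned null (invalid id)");

        return new Span<float>(logits, model.VocabCount);
    }
}
```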
`llama_get_logits_ith` returns null if an invalid value for `i` is passed in. There must be a bug somewhere at a higher level that is causing an incorrect value to be passed in. Since this error only seems to affect Kernel Memory, it must be something specific to the KM wrapper.
@martindevans I have found a solution.

1. Set `Embeddings = false` in `WithLLamaSharp`:

```csharp
public static IKernelMemoryBuilder WithLLamaSharp(this IKernelMemoryBuilder builder, LLamaSharpConfig config)
{
    ModelParams parameters = new(config.ModelPath)
    {
        Embeddings = false,
        ...
```

2. Set the values `UBatchSize` and `BatchSize` in `LLamaSharpTextEmbeddingGenerator`:

```csharp
public LLamaSharpTextEmbeddingGenerator(LLamaSharpConfig config, LLamaWeights weights)
{
    ModelParams @params = new(config.ModelPath)
    {
        Embeddings = true,
        ...
        UBatchSize = 2000,
        BatchSize = 2000
    };
```
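For context, this is roughly how those settings end up being used when building the memory (a sketch only: `WithLLamaSharp` and `LLamaSharpConfig` are the types quoted above, but the constructor, document name, and `Build<MemoryServerless>` wiring are assumptions based on the standard KernelMemory builder pattern):

```csharp
using LLamaSharp.KernelMemory;
using Microsoft.KernelMemory;

// Sketch: exact constructor arguments are an assumption.
var config = new LLamaSharpConfig(@"path/to/model.gguf");

var memory = new KernelMemoryBuilder()
    .WithLLamaSharp(config)          // applies ModelParams { Embeddings = false, ... } as above
    .Build<MemoryServerless>();

await memory.ImportDocumentAsync("sample.pdf");

// This is the call that crashed before the fix:
var answer = await memory.AskAsync("What does the document say?");
```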
While testing, I noticed that it runs about 2x slower than 0.13.0. I wonder why that is?
> Embeddings = false

Aha, I think you've cracked it! A while ago the behaviour of the embeddings flag was changed, so logits can no longer be extracted if `embeddings=true`.
And in `LLamaSharpTextEmbeddingGenerator` you must specify the values `UBatchSize` and `BatchSize`!
I'm not sure about that - there should be sensible defaults for those values. In LLamaSharp they're set to default values here. It's possible KernelMemory is overriding those defaults with something incorrect though (I don't really know the KM stuff, so I can't be certain).
Without these values, there will be an error: "Input contains more tokens than configured batch size". That is, the value must be greater than 512, and right now the only way to set them is to rewrite the LLamaSharpTextEmbeddingGenerator class.
Apparently it is necessary to add `UBatchSize` and `BatchSize` to `LLamaSharpConfig`.
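A sketch of what that addition could look like (the two new properties are a proposal, not existing API; the types mirror ModelParams and are assumptions):

```csharp
public class LLamaSharpConfig
{
    public string ModelPath { get; set; }

    // Proposed additions, so callers can size batches without rewriting
    // LLamaSharpTextEmbeddingGenerator; null would fall back to the library defaults.
    public uint? BatchSize { get; set; }
    public uint? UBatchSize { get; set; }
}
```

LLamaSharpTextEmbeddingGenerator could then apply something like `config.BatchSize ?? 512` when building its ModelParams.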
It seems that `embeddings=false` should always be set.
I'm super busy this month, but I will try to make time to fix the issues you found that I summarised here when I get a chance (soon, hopefully. Definitely before the next release).
The problem has been found. You need to force embeddings=false.
I wasn't sure if there's more going on, since you also mentioned a need to change the batch size. Is that just because of the size of your request (you need a larger batch to fit it all in), or is there more going on there?
Yes, the block size is larger than the batch size, and currently this value cannot be changed without rewriting the LLamaSharpTextEmbeddingGenerator class.
Any update on this?
Any update on this?
There is a solution above, Embeddings = false!
@aropb Where should "Embeddings = false" be added? There does not seem to be the method WithLLamaSharp in LLamaSharp.KernelMemory project. Thanks.
It is shown above where it goes.
It seems that for models that support both ChatCompletion and Embeddings, the new version must set `Embeddings=false` in order to use ChatCompletion properly.
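In other words, something like this (a sketch using the `ModelParams` members quoted earlier; the point is separate instances per role rather than one shared one):

```csharp
// One ModelParams per role instead of a single shared instance:
var chatParams  = new ModelParams(modelPath) { Embeddings = false }; // logits available for sampling
var embedParams = new ModelParams(modelPath) { Embeddings = true };  // embeddings only, no logits
```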
Not working in my case; I have another error, in SafeLLamaContextHandle.cs: System.AccessViolationException: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.'
It does not work for me, either. I made the suggested changes in these places:
- in BuilderExtensions.cs
- in LLamaSharpTextEmbeddingGenerator

Am I missing something?
You need to always set Embeddings = false by default. The error occurs when calling AskAsync. The embedding generator does not need to be changed (if nbatch == ubatch).
I did what is mentioned here, see:
https://github.com/lubotorok/LLamaSharp/commit/d38091d4d33fb3281c6f0ec5f6d562cab1be334c
but I had to lower the context size too. Currently I set it to 4000; I had 131,000 before and was getting an AccessViolation with the llama-3.1-8b-4k model even with this modification.
I am using the same model as a chat assistant with a context size of 131,000 and it works. I am just learning both LLamaSharp and KM. I hope this observation helps.
Description
I use KernelMemory. The logits are empty.
The error occurs at the time of the call: memory.AskAsync()
I debugged using cloned copies of the BaseSamplingPipeline and DefaultSamplingPipeline classes.
Reproduction Steps
The error occurs at the time of the call:

```csharp
MemoryAnswer answer = await memory.AskAsync(question: question, filters: filters);
```
Environment & Configuration
Known Workarounds
-