Mimisss closed this issue 2 weeks ago
Sometimes the model just isn't smart enough. For example, when I tried Llama 70B, it only worked with the combobox and not with the other components, so make sure Awan LLM is actually capable of this action.
Like I said, the model responds successfully, and in an OpenAI-compatible way, for that matter.
So, in the case of Smart TextArea, when I type in "The sky is blue because ", I expect it to be completed with " of the way light interacts with the Earth's atmosphere.", which is the text returned by the model:
```json
{"id":"cmpl-55bae8a9134f444da677b732aa555192","object":"text_completion","created":1717274069,"model":"Awanllm-Llama-3-8B-Dolfin","choices":[{"index":0,"text":"* of the way light interacts with the Earth's atmosphere. It seems like this would be a straightforward question, but it actually has taken scientists quite some time to figure out the exact answer. Scientists have studied this topic extensively and come up with several theories","logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":8,"total_tokens":58,"completion_tokens":50}}
```
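For reference, the suggestion text in that payload lives in `choices[0].text`. A minimal sketch of pulling it out with `System.Text.Json`, assuming the response shape shown above:

```csharp
using System.Text.Json;

// Hedged sketch: extract the completion text from an OpenAI-style
// text_completion payload like the one above.
static string ExtractCompletionText(string responseJson)
{
    using var doc = JsonDocument.Parse(responseJson);
    return doc.RootElement
        .GetProperty("choices")[0]
        .GetProperty("text")
        .GetString() ?? string.Empty;
}
```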
I don't think this has anything to do with the model's abilities. More likely it's how GetChatResponseAsync gets called and the format of the result string, I guess.
Isn't it?
Hello, anyone else here?
Abandoned project?
> So, in the case of Smart TextArea, when I type in "The sky is blue because ", I expect it to be filled in by " of the way light interacts with the Earth's atmosphere.", which is the text returned by the model:
SmartTextArea doesn't just insert the returned text directly. It needs the response to be of the form `[OK:suggestion]`, where `suggestion` is the text to insert. This format confirms the response is actually an insertion suggestion and not some other message like "I'm sorry, I don't know what to do" or "Error: invalid key" or similar.
If you want to observe this for yourself, try subclassing `SmartTextAreaInference`, override `GetInsertionSuggestionAsync`, call the base implementation, and use the debugger to observe the returned data format. You can then make your own subclass of `SmartTextAreaInference` or `IInferenceBackend` that converts the model's returned text to the required format.
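A hedged sketch of the debugging subclass described above. The parameter list is a guess based on this thread, not verified against the library; mirror the actual virtual method signature from the SmartComponents source:

```csharp
using System;
using System.Threading.Tasks;

// Hedged sketch: override GetInsertionSuggestionAsync, call the base
// implementation, and inspect the raw result under the debugger.
// The parameter list here is an assumption; match it to the real
// signature in the SmartComponents source.
public class DebuggingSmartTextAreaInference : SmartTextAreaInference
{
    public override async Task<string> GetInsertionSuggestionAsync(
        IInferenceBackend inference,
        SmartTextAreaConfig config,
        string textBefore,
        string textAfter)
    {
        var result = await base.GetInsertionSuggestionAsync(
            inference, config, textBefore, textAfter);

        Console.WriteLine($"Raw inference result: {result}"); // breakpoint here
        return result;
    }
}
```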
So just to clarify, your existing code might work if you updated it to return:
```csharp
return $"[OK:{responseContent}]";
```
Can you please provide a sample custom implementation of `IInferenceBackend`? I want to use Awan LLM, and here's what I've tried so far:
Oversimplified maybe, but the request is successful and I do get a response; for example, if the prompt is "The sky is blue because ":
But nothing appears in the Smart TextArea:
What am I missing?
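Putting the earlier advice in this thread together, the likely missing piece is the `[OK:...]` envelope. Below is a hedged sketch of a custom `IInferenceBackend` that wraps the Awan LLM completion text in that envelope; the interface signature and `ChatParameters` type are assumptions based on this thread, and `GetAwanCompletionAsync` is a hypothetical placeholder for your existing HTTP call:

```csharp
using System.Threading.Tasks;

// Hedged sketch: wrap an existing OpenAI-compatible call so the raw
// completion comes back in the [OK:suggestion] envelope that
// SmartTextArea expects. The IInferenceBackend member below is an
// assumption; check the SmartComponents source for the real signature.
public class AwanLlmInferenceBackend : IInferenceBackend
{
    public async Task<string> GetChatResponseAsync(ChatParameters options)
    {
        // Call Awan LLM however your existing code already does and get
        // the raw completion text (choices[0].text in the JSON above).
        string responseContent = await GetAwanCompletionAsync(options);

        // SmartTextArea only inserts suggestions wrapped as [OK:...].
        return $"[OK:{responseContent}]";
    }

    // Hypothetical placeholder: replace with your existing HTTP call.
    private Task<string> GetAwanCompletionAsync(ChatParameters options)
        => throw new System.NotImplementedException();
}
```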