dotnet-smartcomponents / smartcomponents

Experimental, end-to-end AI features for .NET apps

SmartTextArea: Sample of custom implementation of IInferenceBackend? #59

Closed: Mimisss closed this issue 2 weeks ago

Mimisss commented 1 month ago

Can you please provide a sample custom implementation of IInferenceBackend?

I want to use Awan LLM and here's what I've tried so far:

// Requires: using System.Net.Http.Headers; using System.Text;
// plus Newtonsoft.Json for JsonConvert.
public async Task<string> GetChatResponseAsync(ChatParameters options)
{
    var apiUrl = configuration["SmartComponents:Endpoint"] + "/completions";

    using (var client = new HttpClient())
    {
        // Take the last chat message and strip it down to the raw user text
        // (fragile: assumes a fixed prompt layout of at least two lines).
        string message = options.Messages[options.Messages.Count - 1].Text;
        message = message.Split("\n")[1].Substring(10);

        var requestData = new
        {
            model = configuration["SmartComponents:DeploymentName"],
            prompt = message,
            max_tokens = 50,
            temperature = 0.7,
            stop = new[] { "\n" }
        };

        var jsonRequest = JsonConvert.SerializeObject(requestData);
        var content = new StringContent(jsonRequest, Encoding.UTF8, "application/json");

        client.Timeout = TimeSpan.FromSeconds(30);

        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer",
            configuration["SmartComponents:ApiKey"]);

        HttpResponseMessage response = await client.PostAsync(apiUrl, content);

        if (response.IsSuccessStatusCode)
        {
            // Note: this returns the entire completion JSON document verbatim.
            var responseContent = await response.Content.ReadAsStringAsync();
            return responseContent;
        }
        else
        {
            return string.Empty;
        }
    }
}

Oversimplified maybe, but the request is successful and I do get a response. For example, if the prompt is "The sky is blue because ":

{"id":"cmpl-55bae8a9134f444da677b732aa555192","object":"text_completion","created":1717274069,"model":"Awanllm-Llama-3-8B-Dolfin","choices":[{"index":0,"text":"* of the way light interacts with the Earth's atmosphere. It seems like this would be a straightforward question, but it actually has taken scientists quite some time to figure out the exact answer. Scientists have studied this topic extensively and come up with several theories","logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":8,"total_tokens":58,"completion_tokens":50}}

but nothing appears in the smart textarea:

<smart-textarea user-role="@userRole" rows="40" cols="100" placeholder="Type here..." />

What am I missing?

wisamidris7 commented 1 month ago

Sometimes the model just isn't smart enough. I tried Llama 70B and it only worked with the combobox; it didn't work with the other components. Make sure Awan LLM is actually capable of this kind of task.

Mimisss commented 1 month ago

Like I said, the model responds successfully, and in an OpenAI-compatible way at that.

So, in the case of Smart TextArea, when I type "The sky is blue because ", I expect it to be filled in with " of the way light interacts with the Earth's atmosphere.", which is the text returned by the model:

{"id":"cmpl-55bae8a9134f444da677b732aa555192","object":"text_completion","created":1717274069,"model":"Awanllm-Llama-3-8B-Dolfin","choices":[{"index":0,"text":"* of the way light interacts with the Earth's atmosphere. It seems like this would be a straightforward question, but it actually has taken scientists quite some time to figure out the exact answer. Scientists have studied this topic extensively and come up with several theories","logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":8,"total_tokens":58,"completion_tokens":50}}

I don't think this has anything to do with the model's abilities. More likely it's a matter of how GetChatResponseAsync gets called and the format of the result string.

Isn't it?

Mimisss commented 3 weeks ago

Hello, anyone else here?

Abandoned project?

SteveSandersonMS commented 2 weeks ago

> So, in the case of Smart TextArea, when I type "The sky is blue because ", I expect it to be filled in with " of the way light interacts with the Earth's atmosphere.", which is the text returned by the model:

SmartTextArea doesn't just insert the returned text directly. It needs the response to be of the form [OK:suggestion] where suggestion is the text to insert. This format is to confirm the response is actually an insertion suggestion and not some other message like "I'm sorry I don't know what to do" or "Error: invalid key" or similar.
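For example, for the prompt in this thread, your backend would need to return:

[OK: of the way light interacts with the Earth's atmosphere.]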

If you want to observe this for yourself, try subclassing SmartTextAreaInference: override GetInsertionSuggestionAsync, call the base implementation, and use the debugger to observe the returned data format. You can then make your own subclass of SmartTextAreaInference, or your own IInferenceBackend, that converts the model's returned text to the required format.
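To make that concrete, here is a minimal sketch of such an IInferenceBackend, based only on the OpenAI-compatible response shape shown earlier in this thread. AwanInferenceBackend is a hypothetical name, the prompt handling is simplified to sending the last message's text verbatim, and it uses System.Text.Json rather than Newtonsoft.Json. It assumes an ASP.NET Core project's implicit usings (for IConfiguration and friends) plus a using for the SmartComponents inference types (IInferenceBackend, ChatParameters):

using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

public class AwanInferenceBackend : IInferenceBackend
{
    // One shared HttpClient avoids socket exhaustion from per-call instances.
    private static readonly HttpClient Client = new() { Timeout = TimeSpan.FromSeconds(30) };
    private readonly IConfiguration configuration;

    public AwanInferenceBackend(IConfiguration configuration)
        => this.configuration = configuration;

    public async Task<string> GetChatResponseAsync(ChatParameters options)
    {
        var apiUrl = configuration["SmartComponents:Endpoint"] + "/completions";

        // Simplification: send the last message's text as the raw completion prompt.
        var prompt = options.Messages[options.Messages.Count - 1].Text;

        var requestData = new
        {
            model = configuration["SmartComponents:DeploymentName"],
            prompt,
            max_tokens = 50,
            temperature = 0.7,
            stop = new[] { "\n" }
        };

        using var request = new HttpRequestMessage(HttpMethod.Post, apiUrl)
        {
            Content = new StringContent(
                JsonSerializer.Serialize(requestData), Encoding.UTF8, "application/json")
        };
        request.Headers.Authorization = new AuthenticationHeaderValue(
            "Bearer", configuration["SmartComponents:ApiKey"]);

        var response = await Client.SendAsync(request);
        if (!response.IsSuccessStatusCode)
        {
            return string.Empty;
        }

        // Extract choices[0].text from the completion JSON and wrap it in the
        // [OK:...] format that SmartTextArea requires.
        var json = await response.Content.ReadAsStringAsync();
        using var doc = JsonDocument.Parse(json);
        var suggestion = doc.RootElement
            .GetProperty("choices")[0]
            .GetProperty("text")
            .GetString() ?? string.Empty;

        return $"[OK:{suggestion}]";
    }
}

You'd then register it at startup; if I recall the README correctly, that's along the lines of builder.Services.AddSmartComponents().WithInferenceBackend<AwanInferenceBackend>().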

SteveSandersonMS commented 2 weeks ago

So just to clarify, your existing code might work if you updated it to return:

return $"[OK:{responseContent}]";