mistralai / client-js

JS Client library for Mistral AI platform

Function calling #53

Open savikko opened 4 months ago

savikko commented 4 months ago

Hi!

What is the way to call (Mistral Large) functions with this client?

I did not find any mention of functions in the API documentation (https://docs.mistral.ai/api/#operation/createChatCompletion).

jfhenon commented 4 months ago

> Hi!
>
> What is the way to call (Mistral Large) functions with this client?
>
> I did not find any mention of functions in the API documentation (https://docs.mistral.ai/api/#operation/createChatCompletion).

+1

savikko commented 4 months ago

And regarding Azure Mistral Large, they say this:

> Does Mistral Large on Azure support function calling and Json output?
>
> The Mistral Large model can do function calling and generate Json output, but support for those features will roll out soon on the Azure platform.

thibautvdu commented 4 months ago

Has anyone using La Plateforme with JavaScript successfully tried function calling?

According to this package's current TypeScript definitions, function calling would only be available with streaming (client.chatStream), but I cannot get it to work either way (streaming or non-streaming).

Using mistral-large-latest, it seems that the model is aware of which function it should call, but tool_calls is always empty and the model outputs "[Function Call: 'get-cart']" straight into the content.

Request call:

      const completions = this.mistralClient.chatStream({
        model: targetModel,
        messages: [this.getPrompt(), ...messages],
        tools: this.getTools(),
        responseFormat: {
          type: "text" as ResponseFormats,
        },
      });

      let content = "";
      let toolCalls;
      for await (const completionChunk of completions) {
        const choice = completionChunk.choices[0];
        content += choice.delta.content || "";
        if (choice.delta.tool_calls) {
          toolCalls = choice.delta.tool_calls;
        }
      }

Answer:

dashboard:dev: Received completion {
dashboard:dev:   content: `{"answer": "Bien sûr, laissez-moi vous montrer le contenu de votre panier. 💰\\n\\n[Function Call: 'get-cart']", "thoughts": "I need to call the 'get-cart' function to retrieve the current items in the user's cart."}`,
dashboard:dev:   toolCalls: undefined
dashboard:dev: }
lucas-vitrus commented 3 months ago

+1

Same issue, no tools on the JavaScript client.

johannbuscail commented 3 months ago

Any update on this?

briannezhad commented 3 months ago

+1

JamieLee0510 commented 2 months ago

> Has anyone using La Plateforme with JavaScript successfully tried function calling?
>
> According to this package's current TypeScript definitions, function calling would only be available with streaming (client.chatStream), but I cannot get it to work either way (streaming or non-streaming).
>
> Using mistral-large-latest, it seems that the model is aware of which function it should call, but tool_calls is always empty and the model outputs "[Function Call: 'get-cart']" straight into the content.
>
> Request call:
>
>       const completions = this.mistralClient.chatStream({
>         model: targetModel,
>         messages: [this.getPrompt(), ...messages],
>         tools: this.getTools(),
>         responseFormat: {
>           type: "text" as ResponseFormats,
>         },
>       });

Maybe you could try setting the toolChoice option to the value any? I was able to use function calling in my case, including with the model mistral-large-latest.
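
For what it's worth, here is a minimal non-streaming sketch of that suggestion. It assumes the 0.2.0-era API surface (MistralClient, chat, toolChoice) and uses a hypothetical get-cart tool mirroring the function name from this thread:

    import MistralClient from '@mistralai/mistralai';

    const client = new MistralClient(process.env.MISTRAL_API_KEY);

    // Hypothetical tool definition for illustration only.
    const tools = [
      {
        type: 'function',
        function: {
          name: 'get-cart',
          description: "Retrieve the current items in the user's cart",
          parameters: { type: 'object', properties: {} },
        },
      },
    ];

    const response = await client.chat({
      model: 'mistral-large-latest',
      messages: [{ role: 'user', content: 'What is in my cart?' }],
      tools,
      // 'any' forces the model to call one of the provided tools;
      // 'auto' lets the model decide between text and a tool call.
      toolChoice: 'any',
    });

    // When the model emits a tool call, tool_calls carries the function
    // name plus JSON-encoded arguments instead of plain text content.
    console.log(response.choices[0].message.tool_calls);

With toolChoice forced to any, the model cannot fall back to writing "[Function Call: ...]" into the text content, which may sidestep the failure mode reported above.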

fuegoio commented 2 months ago

Hi everyone! Thanks all for your contributions, and apologies for the delayed answer.

Function calling is working properly on the JS client, in both normal and streaming mode. The latest release (0.2.0) should fix the TypeScript definitions.
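
For reference, a minimal streaming sketch, reusing the client and tools from the sketch above; treat it as a sketch under those assumptions, not canonical usage. Tool-call deltas may be split across chunks, so they are accumulated rather than overwritten:

    const stream = client.chatStream({
      model: 'mistral-large-latest',
      messages: [{ role: 'user', content: 'What is in my cart?' }],
      tools,
    });

    let content = '';
    const toolCalls = [];
    for await (const chunk of stream) {
      const delta = chunk.choices[0].delta;
      content += delta.content ?? '';
      // Collect every tool-call delta instead of keeping only the last one.
      if (delta.tool_calls) {
        toolCalls.push(...delta.tool_calls);
      }
    }

    for (const call of toolCalls) {
      console.log(call.function.name, call.function.arguments);
    }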

We are aware of model issues (not related to this client) when function calls come after some text, for example. We would greatly appreciate reproducible examples of those issues. @thibautvdu @lucas-vitrus @briannezhad @johannbuscail do you have examples of prompt + tools given to the model where it answers badly?