microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

[Bug]: [autogenstudio] agent llm send max_tokens: null #2050

Open nicho2 opened 8 months ago

nicho2 commented 8 months ago

Describe the bug

When the max_tokens parameter is None, the agent sends a request to /v1/chat/completions with "max_tokens": null. In this case the LLM doesn't understand it and stops after the second token.

Steps to reproduce

autogenstudio 0.0.54 + LM Studio + Mistral

Model Used

Mistral-7B-Instruct-v0.2

Expected Behavior

The max_tokens parameter should not be sent when it is None.
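For illustration, a minimal sketch of the kind of guard being requested, assuming a plain `requests` call against LM Studio's OpenAI-compatible endpoint (the helper name and URL are assumptions, not AutoGen's actual code path):

```python
import requests

def build_chat_payload(messages, model, max_tokens=None, temperature=0.1):
    """Build a /v1/chat/completions body, omitting any parameter that is None."""
    payload = {
        "messages": messages,
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "stream": False,
    }
    # Drop None-valued keys so the server never receives "max_tokens": null.
    return {key: value for key, value in payload.items() if value is not None}

response = requests.post(
    "http://localhost:1234/v1/chat/completions",  # default LM Studio server address
    json=build_chat_payload(
        [{"role": "user", "content": "je cherche la liste des produits du réseau"}],
        model="local_lmstudio",
    ),
)
print(response.json()["choices"][0]["message"]["content"])
```

With the None-valued key dropped from the body, the server should fall back to its own default token limit instead of choking on null.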

Screenshots and logs

[2024-03-18 09:55:28.532] [INFO] [LM STUDIO SERVER] Processing queued request...
[2024-03-18 09:55:28.533] [INFO] Received POST request to /v1/chat/completions with body: {
  "messages": [
    {
      "content": "Tu es un expert dans la communication avec ............ ####\n\n ",
      "role": "system"
    },
    {
      "content": "je cherche la liste des produits du réseau",
      "role": "user"
    }
  ],
  "model": "local_lmstudio",
  "max_tokens": null,
  "stream": false,
  "temperature": 0.1
}
[2024-03-18 09:55:28.533] [INFO] [LM STUDIO SERVER] Context Overflow Policy is: Rolling Window
[2024-03-18 09:55:28.533] [INFO] [LM STUDIO SERVER] Last message: { role: 'user', content: 'je cherche la liste des produits du réseau' } (total messages = 2)
[2024-03-18 09:55:30.909] [INFO] [LM STUDIO SERVER] Accumulating tokens ... (stream = false)
[2024-03-18 09:55:30.909] [INFO] Accumulated 1 tokens: To
[2024-03-18 09:55:30.989] [INFO] Accumulated 2 tokens: To find
[2024-03-18 09:55:31.059] [INFO] [LM STUDIO SERVER] Generated prediction: {
  "id": "chatcmpl-rbmlwf91blp1nfv6114if6",
  "object": "chat.completion",
  "created": 1710752128,
  "model": "/home/system/.cache/lm-studio/models/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q8_0.gguf",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " To find"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15271,
    "completion_tokens": 2,
    "total_tokens": 15273
  }
}

Additional Information

AutoGen Studio CLI version: 0.0.54
autogenstudio==0.0.54
pyautogen==0.2.19

DavidBaurCodes commented 8 months ago

I have the exact same problem. I'm not really able to define max_tokens with these two GUIs, or at least I don't know where:

[2024-03-19 16:31:15.988] [INFO] [LM STUDIO SERVER] Processing queued request...
[2024-03-19 16:31:15.988] [INFO] Received POST request to /v1/chat/completions with body: {
  "messages": [
    { "content": "You are a helpful assistant.", "role": "system" },
    { "content": "Tell me a joke", "role": "assistant" },
    { "content": "A man", "role": "user" },
    { "content": "A man", "role": "assistant" },
    { "content": "A man", "role": "user" },
    .....
    { "content": "A man", "role": "assistant" },
    { "content": "A man", "role": "user" }
  ],
  "model": "local",
  "max_tokens": null,
  "stream": false,
  "temperature": 0.1
}

Also AutoGen Studio v0.0.54 and LM Studio 0.2.17.

avonx commented 8 months ago

Let me share what I'm doing to work around the problem (not a permanent solution).

Firstly, there is currently no GUI field for setting max_tokens: https://github.com/microsoft/autogen/issues/1608

So you need to edit samples/apps/autogen-studio/autogenstudio/utils/dbdefaults.json to define max_tokens.

In the existing implementation, you need to update the `llm_config` parameter of `LLMConfig`: https://github.com/microsoft/autogen/blob/5b5727172ce20c80ddb2f9c9ce371800897e1007/samples/apps/autogen-studio/autogenstudio/datamodel.py#L95
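For context, the relevant field in the linked datamodel.py looks roughly like the sketch below (abbreviated and partly illustrative; the point is only the max_tokens default):

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class LLMConfig:
    """Abbreviated sketch of AutoGen Studio's LLM config data model."""
    config_list: List[Any] = field(default_factory=list)
    temperature: float = 0
    # Line 95 in the linked file: a None default here is what ends up
    # serialized as "max_tokens": null in the request body.
    max_tokens: Optional[int] = None
```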

So what you can do is edit here (assuming you change the setting): https://github.com/microsoft/autogen/blob/5b5727172ce20c80ddb2f9c9ce371800897e1007/samples/apps/autogen-studio/autogenstudio/utils/dbdefaults.json#L224

* delete the `config_list` option
* add a `max_tokens` option

(see the sketch below)
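After those two edits, the llm_config block at the linked line might look roughly like this (a sketch, not the contents of the attached file; the values are illustrative):

```json
{
  "llm_config": {
    "temperature": 0.1,
    "cache_seed": null,
    "max_tokens": 1000
  }
}
```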

I attach my dbdefaults.json. Since I'm using Claude 3 with LiteLLM, you have to change the definition of the model and llm_config according to your needs.

I also recommend deleting the database.sqlite you created before whenever you change dbdefaults.json. Or you can create a new workspace by using this option: `autogenstudio ui --reload --appdir /path/to/your/workspace`

dbdefaults.json

PrinzMegahertz commented 7 months ago

For me, this happens with AutoGen Studio 0.56 and all models (I tried several Mistral, Mixtral, and Llama 2 variants).

PrinzMegahertz commented 7 months ago

> Let me share what I'm doing to work around the problem (not a permanent solution).
>
> Firstly, there is currently no GUI field for setting max_tokens: #1608
>
> So you need to edit samples/apps/autogen-studio/autogenstudio/utils/dbdefaults.json to define max_tokens.
>
> In the existing implementation, you need to update the `llm_config` parameter of `LLMConfig`:
>
> https://github.com/microsoft/autogen/blob/5b5727172ce20c80ddb2f9c9ce371800897e1007/samples/apps/autogen-studio/autogenstudio/datamodel.py#L95
>
> So what you can do is edit here (assuming you change the setting):
>
> https://github.com/microsoft/autogen/blob/5b5727172ce20c80ddb2f9c9ce371800897e1007/samples/apps/autogen-studio/autogenstudio/utils/dbdefaults.json#L224
>
> * delete the `config_list` option
> * add a `max_tokens` option
>
> I attach my dbdefaults.json. Since I'm using Claude 3 with LiteLLM, you have to change the definition of the model and llm_config according to your needs.
>
> I also recommend deleting the database.sqlite you created before whenever you change dbdefaults.json. Or you can create a new workspace by using this option: `autogenstudio ui --reload --appdir /path/to/your/workspace`

Actually, to add to this: I think it is enough to change datamodel.py. After you do that, the change will apply to all new agents you create. If you have already configured agents to use your local LLM, nothing will make them work, since max_tokens = null has already been assigned to them. So just delete them and create new agents.

Kudos to avonx for providing this solution; I was really about to bite into my keyboard because of this bug.

lpingree commented 7 months ago

I seem to have this issue too; none of my local models seem to work with a default install of LM Studio. It stops after showing only the first word of output from the LLM in the UX. Is this slated to be fixed?

christiandarkin commented 7 months ago

I have the same problem. Editing datamodel.py line 95 doesn't seem to fix it.

pressx2select commented 7 months ago

@avonx I tried the proposed solution of editing the dbdefaults.json file to add the line about max_tokens, and also tried using the file you provided (thank you for that), but it did not resolve it for me.

MMoneer commented 7 months ago

> @avonx I tried the proposed solution of editing the dbdefaults.json file to add the line about max_tokens, and also tried using the file you provided (thank you for that), but it did not resolve it for me.

Hi, the steps below work for me:

1. Edit line 95 in datamodel.py: `max_tokens: Optional[int] = 3000`
2. Use @avonx's dbdefaults.json file
3. Delete the files folder (if it contains no important data) and database.sqlite

(screenshot: Q-Dir_03042024_053)
odrobnik commented 7 months ago

I just got stumped by the same issue. Setting max_tokens helps for me too. Shouldn't -1 also work as the normal default? Should the UI allow specifying such things?

waszak commented 6 months ago

My workaround was just to update it directly in the database. (Of course, you then need to re-pick the agent in the workflow, or update it.) I also started a new chat, and that solved the issue.
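For anyone who wants to script that kind of in-place fix, a rough sketch, assuming the agent config is stored as a JSON text column (the table and column names here are hypothetical; inspect your own database.sqlite schema first):

```python
import json
import sqlite3

# NOTE: table and column names below are hypothetical -- check your own
# schema first, e.g. with:  sqlite3 database.sqlite ".schema"
con = sqlite3.connect("database.sqlite")
rows = con.execute("SELECT id, config FROM agents").fetchall()
for row_id, raw in rows:
    record = json.loads(raw)
    llm_config = record.get("config", {}).get("llm_config")
    if llm_config and llm_config.get("max_tokens") is None:
        llm_config["max_tokens"] = 2000  # any positive limit instead of null
        con.execute(
            "UPDATE agents SET config = ? WHERE id = ?",
            (json.dumps(record), row_id),
        )
con.commit()
con.close()
```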

(two screenshots attached)