Open · nicho2 opened 8 months ago
I have the exact same problem. I'm not able to define max_tokens with these two GUIs, at least I don't know where:

    [2024-03-19 16:31:15.988] [INFO] [LM STUDIO SERVER] Processing queued request...
    [2024-03-19 16:31:15.988] [INFO] Received POST request to /v1/chat/completions with body: {
      "messages": [
        { "content": "You are a helpful assistant.", "role": "system" },
        { "content": "Tell me a joke", "role": "assistant" },
        { "content": "A man", "role": "user" },
        { "content": "A man", "role": "assistant" },
        { "content": "A man", "role": "user" },
        ...
        { "content": "A man", "role": "assistant" },
        { "content": "A man", "role": "user" }
      ],
      "model": "local",
      "max_tokens": null,
      "stream": false,
      "temperature": 0.1
    }
Also AutoGen v0.0.54 and LM Studio 0.2.17.
Let me share what I'm doing to work around the problem (not a permanent solution).

First, there is no GUI field for setting max_tokens at the moment: https://github.com/microsoft/autogen/issues/1608

So you need to edit `samples/apps/autogen-studio/autogenstudio/utils/dbdefaults.json` to define max_tokens.
In the existing implementation, you need to update the `llm_config` parameter of `LLMConfig`:
https://github.com/microsoft/autogen/blob/5b5727172ce20c80ddb2f9c9ce371800897e1007/samples/apps/autogen-studio/autogenstudio/datamodel.py#L95
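For reference, the field in question looks roughly like the sketch below. This is a simplified, hypothetical reconstruction of the dataclass in `datamodel.py` (other fields omitted), and the value 3000 is just an example default:

```python
# Hypothetical, simplified sketch of LLMConfig in
# samples/apps/autogen-studio/autogenstudio/datamodel.py.
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class LLMConfig:
    """Data model for an agent's LLM configuration (simplified)."""
    config_list: List[Any] = field(default_factory=list)
    temperature: float = 0
    # The stock default is None, which gets serialized as "max_tokens": null
    # in the request body; replacing it with an integer works around the bug.
    max_tokens: Optional[int] = 3000
```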
So what you can do is edit here (assuming you change the setting): https://github.com/microsoft/autogen/blob/5b5727172ce20c80ddb2f9c9ce371800897e1007/samples/apps/autogen-studio/autogenstudio/utils/dbdefaults.json#L224

* delete the `config_list` option
* add the `max_tokens` option

I attach my `dbdefaults.json`.
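Purely for illustration, the edited `llm_config` block ends up shaped roughly like this (the values here are placeholders from my setup, not canonical defaults; see the attached file for the full version):

```json
{
  "llm_config": {
    "temperature": 0.1,
    "max_tokens": 2048
  }
}
```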
Since I'm using Claude 3 via LiteLLM, you have to change the definition of the model and llm_config according to your needs.
I also recommend deleting the `database.sqlite` you created before whenever you change `dbdefaults.json`.
Or you can create a new workspace (with a fresh database seeded from the defaults) by using this option:

`autogenstudio ui --reload --appdir /path/to/your/workspace`
Attachment: dbdefaults.json
For me, this happens with AutoGen Studio 0.56 and all models I tried (several Mistral, Mixtral, and Llama 2 variants).
Actually, to add to this: I think it is enough to change datamodel.py. After you do that, the change applies to all new agents you create. If you have already configured agents to use your local LLM, nothing will make them work, as max_tokens = null has already been assigned to them. So just delete them and create new agents.
Kudos to avonx for providing this solution; I was about to bite into my keyboard because of this bug.
I seem to have this issue too; none of my local models work with a default install of LM Studio. The run stops after showing only the first word of the LLM's output in the UI. Is this slated to be fixed?
I have the same problem. Editing datamodel.py line 95 doesn't seem to fix it.
@avonx I tried the proposed solution of editing the dbdefaults.json file to add the line about max_tokens, and also tried using the file you provided (thank you for that), but it did not resolve it for me.
Hi, the steps below work for me:

1. Edit line 95 in datamodel.py: `max_tokens: Optional[int] = 3000`
2. Use the @avonx dbdefaults.json file
3. Delete the files folder (if it contains no important data) and database.sqlite
I just got stumped by the same issue. Setting the max_tokens helps me too. Shouldn't that also work with -1 as the normal default? Should the UI allow specifying such things?
My workaround was just to update the value directly in the database. (Of course, you then need to re-pick the agent in the workflow, or update it.) I also started a new chat, and that solved the issue.
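In case it helps anyone, below is one way to script that database edit. This is a hedged sketch, not an official procedure: the table name `agents` and column name `config` are assumptions about how `database.sqlite` stores each agent's JSON config, so inspect your own schema first (e.g. `sqlite3 database.sqlite .schema`):

```python
# Hypothetical sketch: patch max_tokens directly in AutoGen Studio's
# database.sqlite. Table/column names are guesses; check your schema first.
import json
import sqlite3

conn = sqlite3.connect("database.sqlite")
cur = conn.cursor()
rows = cur.execute("SELECT id, config FROM agents").fetchall()
for row_id, config_json in rows:
    config = json.loads(config_json)
    llm_config = config.get("llm_config") or {}
    if llm_config.get("max_tokens") is None:
        llm_config["max_tokens"] = 2048  # any explicit integer avoids the null
        config["llm_config"] = llm_config
        cur.execute("UPDATE agents SET config = ? WHERE id = ?",
                    (json.dumps(config), row_id))
conn.commit()
conn.close()
```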
Describe the bug
When the max_tokens parameter is None, the agent sends a request to /v1/chat/completions with max_tokens: null. In this case the LLM doesn't understand it and stops after the second token.
Steps to reproduce
autogenstudio 0.0.54 + LM Studio + Mistral
Model Used
Mistral-7B-Instruct-v0.2
Expected Behavior
The max_tokens parameter should not be sent when it is None.
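One plausible shape of the fix (a minimal sketch of the idea, not the actual autogen code; the helper name `build_payload` is hypothetical) is to drop None-valued parameters before serializing the request body:

```python
# Sketch: omit None-valued parameters so the server never receives
# "max_tokens": null in the /v1/chat/completions body.
import json

def build_payload(messages, model, max_tokens=None, temperature=0.1, stream=False):
    payload = {
        "messages": messages,
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "stream": stream,
    }
    # Keep only keys with a concrete value; False and 0 are kept, None is not.
    return {k: v for k, v in payload.items() if v is not None}

payload = build_payload(
    [{"role": "user", "content": "Tell me a joke"}],
    model="local_lmstudio",
)
print(json.dumps(payload, indent=2))  # no "max_tokens" key at all
```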
Screenshots and logs
    [2024-03-18 09:55:28.532] [INFO] [LM STUDIO SERVER] Processing queued request...
    [2024-03-18 09:55:28.533] [INFO] Received POST request to /v1/chat/completions with body: {
      "messages": [
        { "content": "Tu es un expert dans la communication avec ............ ####\n\n ", "role": "system" },
        { "content": "je cherche la liste des produits du réseau", "role": "user" }
      ],
      "model": "local_lmstudio",
      "max_tokens": null,
      "stream": false,
      "temperature": 0.1
    }
    [2024-03-18 09:55:28.533] [INFO] [LM STUDIO SERVER] Context Overflow Policy is: Rolling Window
    [2024-03-18 09:55:28.533] [INFO] [LM STUDIO SERVER] Last message: { role: 'user', content: 'je cherche la liste des produits du réseau' } (total messages = 2)
    [2024-03-18 09:55:30.909] [INFO] [LM STUDIO SERVER] Accumulating tokens ... (stream = false)
    [2024-03-18 09:55:30.909] [INFO] Accumulated 1 tokens: To
    [2024-03-18 09:55:30.989] [INFO] Accumulated 2 tokens: To find
    [2024-03-18 09:55:31.059] [INFO] [LM STUDIO SERVER] Generated prediction: {
      "id": "chatcmpl-rbmlwf91blp1nfv6114if6",
      "object": "chat.completion",
      "created": 1710752128,
      "model": "/home/system/.cache/lm-studio/models/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q8_0.gguf",
      "choices": [
        {
          "index": 0,
          "message": { "role": "assistant", "content": " To find" },
          "finish_reason": "stop"
        }
      ],
      "usage": { "prompt_tokens": 15271, "completion_tokens": 2, "total_tokens": 15273 }
    }
Additional Information
AutoGen Studio CLI version: 0.0.54
autogenstudio==0.0.54
pyautogen==0.2.19