chrisrude / oobabot

A Discord bot which talks to Large Language Model AIs running on oobabooga's text-generation-webui
MIT License

[BUG] Different Outputs Streaming (More Coherent) vs Single Message (Writes Transcript) #69

Open saphtea opened 1 year ago

saphtea commented 1 year ago

Hey! I've been modifying and playing around with oobabot formatting and even playing with the code a bit.

When I first started using Llama 2 with the "single message" option, I almost always received messages that tried to continue the conversation itself (talking to itself, predicting user responses).

When I switched over to streaming, I finally started getting coherent, single-message replies.

I haven't looked into the code yet because I've been working on this for most of yesterday and this morning, but if I do, I'll post here about any differences I find that could be causing this.

Thank you for your time!

chrisrude commented 1 year ago

Awesome, thanks for the update on this!

It's interesting that there's a difference... I wonder whether you'd keep the improved behavior if you switched back to "single message" mode, or whether it would regress.

The main difference might be that the settings which split the message into parts give the bot more context around what a chat transcript should look like, so it's easier for it to get the gist of what we want it to generate.
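For anyone who wants to poke at this, here's a rough sketch of the two request paths, based on the legacy text-generation-webui API (the pre-OpenAI-compatible one). The endpoints, ports, and payload keys here are assumptions, so check them against your own setup.

```python
# Minimal sketch: single-message (blocking HTTP) vs. streaming (websocket)
# generation against the legacy text-generation-webui API.
import asyncio
import json

import requests
import websockets

PARAMS = {
    "prompt": "AI: Hello!\nUser: How are you?\nAI:",
    "max_new_tokens": 250,
    "temperature": 0.7,
}


def generate_single() -> str:
    """One-shot generation: the whole reply arrives in a single response."""
    resp = requests.post("http://localhost:5000/api/v1/generate", json=PARAMS)
    return resp.json()["results"][0]["text"]


async def generate_streaming() -> str:
    """Token-by-token generation over the streaming websocket endpoint."""
    chunks = []
    async with websockets.connect("ws://localhost:5005/api/v1/stream") as ws:
        await ws.send(json.dumps(PARAMS))
        async for message in ws:
            event = json.loads(message)
            if event["event"] == "text_stream":
                chunks.append(event["text"])
            elif event["event"] == "stream_end":
                break
    return "".join(chunks)


if __name__ == "__main__":
    print(generate_single())
    print(asyncio.run(generate_streaming()))
```

In principle both paths should sample from the same distribution, so any consistent quality difference would point at how the surrounding prompt or stop-sequence handling differs between the two modes rather than at the API itself.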

Let me know if the other investigation turns up anything, and thanks for the help!

Mage-Enderman commented 1 year ago

How would I try this?

keninishna commented 11 months ago

I get different responses even though I have identical parameters in oobabot's config.yml and in text-gen-webui. I tried adding the new min_p parameter to config.yml and it loads, but I don't know if it's working. The params are:

```yaml
request_params:
  max_new_tokens: 4000
  do_sample: true
  temperature: 1.6
  top_p: 1
  typical_p: 1
  epsilon_cutoff: 0
  eta_cutoff: 0
  tfs: 1
  top_a: 0
  repetition_penalty: 1.18
  min_p: 0.26
  top_k: 20
  min_length: 0
  no_repeat_ngram_size: 0
  num_beams: 1
  penalty_alpha: 0
  length_penalty: 1
  early_stopping: false
  mirostat_mode: 0
```
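One hypothetical way to check whether min_p is actually reaching the backend: send the same prompt a few times with min_p at 0 and then at an extreme value like 0.99, and see whether the completions collapse to near-identical text. This sketch uses the legacy blocking API; the endpoint, port, and payload keys are assumptions, so adjust them to your setup.

```python
# Probe whether min_p takes effect: at temperature 1.6, min_p=0.99 should
# make sampling nearly deterministic, while min_p=0 leaves it very random.
import requests

URI = "http://localhost:5000/api/v1/generate"  # legacy blocking API (assumed)
PROMPT = "Write one sentence about dragons."

for min_p in (0.0, 0.99):
    payload = {
        "prompt": PROMPT,
        "max_new_tokens": 60,
        "do_sample": True,
        "temperature": 1.6,
        "min_p": min_p,
    }
    # Collect three completions; a set keeps only the distinct ones.
    texts = {
        requests.post(URI, json=payload).json()["results"][0]["text"]
        for _ in range(3)
    }
    print(f"min_p={min_p}: {len(texts)} distinct completions out of 3")
```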

In text-gen-webui I ask "how can I increase my power level past 9000?" and it gives me a list of things to do, but in Discord it just says "become one with the force" no matter what settings I set.

jmoney7823956789378 commented 11 months ago

Could be a difference in prompting. The webui's selected prompt and oobabot's preset system prompt are very different.

keninishna commented 11 months ago

I am wondering that as well. Does oobabot inherit the chat-instruction template from text-gen? The character context for text-gen and the oobabot personality are both "The following is a conversation with an AI Large Language Model. The AI has been trained to answer questions, provide recommendations, and help with decision making. The AI follows user requests. The AI thinks outside the box."

jmoney7823956789378 commented 11 months ago

Don't forget the instruction format, including tags like [INST] and <<SYS>> if you're using a llama2-chat model. These aren't included by default in oobabot.
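For reference, the llama2-chat template looks roughly like this ({system_prompt} and {user_message} are placeholders, and exact whitespace varies between implementations):

```
<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>

{user_message} [/INST]
```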

keninishna commented 11 months ago

I'm using dolphin-mixtral, and the model card says this about the prompt format: "This model uses ChatML prompt format."

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

So in the persona I have:

```
<|im_start|>system
You are a discord AI bot. The AI follows instructions and is helpful.<|im_end|>
```

The model is still wonky. Right now it spams emojis with every reply. The temp is set to 1 in the config, but it doesn't seem to change anything.
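A minimal sketch of how that might look in config.yml, assuming oobabot's generated config with a persona section (the section and key names here are my guesses, so verify them against your own file):

```yaml
# Hypothetical config.yml excerpt; key names assumed from oobabot's
# generated config, so verify against your own file.
persona:
  ai_name: Dolphin
  persona: |
    <|im_start|>system
    You are a discord AI bot. The AI follows instructions and is helpful.<|im_end|>
```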

AlanMW commented 8 months ago

> I'm using dolphin-mixtral, and the model card says this about the prompt format: "This model uses ChatML prompt format." [...] The model is still wonky. Right now it spams emojis with every reply. The temp is set to 1 in the config, but it doesn't seem to change anything.

I'm having similar issues. Did you ever find anything out?