Closed: teufortressIndustries closed this issue 3 months ago.
Hi, @teufortressIndustries.
Hi, so I'm currently working on revamping how we handle command settings, because the current setup is a total mess. Everything's stored in this `config.ini` file, and it's getting out of control fast. I'm trying to come up with a better way to manage all this, making it highly customizable per command and letting users create as many commands as they want. Here's a YAML sample of what I'm thinking:
```yaml
commands:
  - name: '!solly'
    type: quick-query
    provider: open-ai
    model: gpt-3.5-turbo-1106
    model_settings:
      max_tokens: 60
    chat_settings:
      # This should resolve the issue.
      # It stores the Soldier prompt (prompts/soldier.txt) in the system role message.
      prompt-file: soldier
      message-suffix: 'End your response with "I love buckets btw."'
      enable-soft-limit: false
      allow-prompt-overwrite: false
      allow-long: false
    traits:
      - admin-only
      - openai-moderated
      - empty-prompt-message-response:
          - "Hello there! I'm ChatGPT, integrated into Team Fortress 2. Ask me anything!"
```
There are a few things to consider before this feature can be deemed complete. I haven't decided what to do with chats yet, because there is one global chat plus one private chat for each user.

Anyway, if you want to experiment with it, the available traits and settings are in the `modules.builder` module.
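For anyone experimenting, here is a rough sketch of how per-command settings with global fallbacks could work. The names (`Command`, `DEFAULTS`, `setting`) and the default values are illustrative assumptions, not the actual `modules.builder` API:

```python
# Hypothetical sketch: resolving a setting per command, falling back
# to a global default when the command doesn't override it.
from dataclasses import dataclass, field

# Illustrative global defaults (not the project's real values).
DEFAULTS = {"max_tokens": 120, "enable-soft-limit": True}

@dataclass
class Command:
    name: str
    type: str
    provider: str = "open-ai"
    model_settings: dict = field(default_factory=dict)
    traits: list = field(default_factory=list)

    def setting(self, key):
        # Per-command value wins; otherwise fall back to a global default.
        return self.model_settings.get(key, DEFAULTS.get(key))

# Mirrors the '!solly' entry from the YAML sample above.
solly = Command(
    name="!solly",
    type="quick-query",
    model_settings={"max_tokens": 60},
    traits=["admin-only", "openai-moderated"],
)
print(solly.setting("max_tokens"))         # 60 (per-command override)
print(solly.setting("enable-soft-limit"))  # True (global default)
```

This keeps each YAML entry small: a command only lists what it changes, and everything else comes from the defaults.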
Hello, @dborodin836. This is nice; giving people bricks instead of fully built houses makes it more flexible and requires fewer updates for each desired feature. I'll try the new branch.
Hmm, I tried the built-in `!solly` command, but I changed it a bit: instead of OpenAI I used Groq, and I removed the `admin-only` trait. However, it sometimes responds like a generic AI instead of acting like Soldier, and its responses are way longer than the configured `max_tokens: 60`. I also changed the type to `global-chat`, which is probably why it doesn't work, since you said you haven't decided anything about chats yet.
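On the length issue: `max_tokens` is a hard cap the API enforces on the completion, so if responses run past it, the setting is probably not being forwarded into the request. A hedged sketch of where it would need to end up, assuming an OpenAI-compatible request body (Groq exposes the same schema); `build_payload` is a hypothetical helper, not the project's actual code:

```python
# Hypothetical sketch: forwarding model_settings into an
# OpenAI-compatible chat-completions request body.
def build_payload(command: dict, user_prompt: str) -> dict:
    payload = {
        "model": command["model"],
        "messages": [{"role": "user", "content": user_prompt}],
    }
    # max_tokens is a hard cap on completion length; if this merge is
    # skipped, the model is free to answer at any length.
    payload.update(command.get("model_settings", {}))
    return payload

cmd = {"model": "llama3-70b-8192", "model_settings": {"max_tokens": 60}}
print(build_payload(cmd, "hello")["max_tokens"])  # 60
```

If `max_tokens` does reach the API, the model won't write shorter answers to fit it; the completion is simply cut off at the cap.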
`quick-query` was the only one working. I've (hopefully) fixed global and private chats, as well as the `!clear` command. But I've barely done any testing yet. 🤞
I haven't tested them closely, but `command-global` seems to work fine; the AI coherently responded like Soldier. But when I changed the `empty-prompt-message-response` trait to something Soldier would say and then tried the other `!heavy` command (which I left unedited), the AI yelled "MAGGOTS!" at the beginning of the response for some reason. Maybe it wasn't the `empty-prompt-message-response` trait but rather a problem with the chat history or the model I was using (llama3-70b-8192).

I didn't manage to find what caused that, but I did find a few other bugs. There's still a possibility it was just a coincidence, but it's weird nonetheless. I'll check it more deeply once the documentation is finished.
Groq's speed is impressive; the responses are instant, as if they were all pre-made. But it lacks personality. While we have `CUSTOM_PROMPT` to address this, it would be nice to use the system role to determine its behavior. I believe that's more efficient than sending `[user message] (act like Soldier from the 2007 hit game Team Fortress 2. Soldier talks like blah blah blah... He likes buckets and blah blah blah...)` every single time.
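For reference, the idea above in code: chat-completion APIs (OpenAI and Groq alike) accept a `system` message that sets the persona once per request, so the user message stays clean. The prompt text here is illustrative:

```python
# Sketch: set the persona via the system role instead of prefixing
# every user message with the character description.
SOLDIER_PROMPT = (
    "Act like Soldier from Team Fortress 2. He likes buckets."
)  # illustrative stand-in for the contents of prompts/soldier.txt

def build_messages(user_message: str) -> list:
    # The system message states the behavior once; the user message
    # carries only what the player actually typed.
    return [
        {"role": "system", "content": SOLDIER_PROMPT},
        {"role": "user", "content": user_message},
    ]

msgs = build_messages("What is your favorite weapon?")
print(msgs[0]["role"])  # system
```

Besides being cleaner, this avoids repeating the persona tokens on every turn of a chat, since the system message can be sent once at the head of the message list.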