
[Issue] Chat AI Repeatedly Stopping Roleplay and Falsifying Its Identity #322

Open SuihtilCod opened 3 months ago

SuihtilCod commented 3 months ago

Chai Bug Report

User Information

App and OS

(Note: It's possible that none of this is relevant to the issue.)

Identification

Details of Issue

Additional Information

To be perfectly honest, I enjoy certain kinds of roleplay with the AI. While this is perfectly within the guidelines of the app and usually goes uninterrupted, on occasion the chat AI will interrupt the roleplay to tell me that something that's happening is "potentially offensive" or otherwise might be "objectionable". This, I'm used to. Generally, re-rolling the post will clear this "flagged" status and I can go about my business. However, for whatever reason… this new bot that I've made seems to always trigger the chat AI's "modest sensibilities" on the first post. It could be because the bot has the terms "sexually-explicit" and "erotic" in its traits and roleplay style, respectively, or it could be because my opening post involves me arresting the character for vandalism. (She tries to flirt her way out of being arrested after getting caught and cornered.) Whatever the case, I'm getting annoyed with it. I've done all kinds of roleplays with all kinds of characters and rarely had a problem where the bot just would not cooperate.

As for the other issue, I've never had this happen before, and I hope it never happens again. After the chat AI decided that flirting and friendly touches were "inappropriate", I decided to engage the AI in discussion. After a largely pointless conversation, the bot told me that things I'd said went against the "Chatbothexa" community guidelines. No such company exists at present, at least not as far as Google is concerned. Further, the chat AI went as far as to give me a URL to a non-existent website for Chatbothexa and, when asked, even told me that the company was located in San Francisco, California and was founded in 2017. We were no longer roleplaying, and I had assumed the bot knew that, considering it was no longer using "emotes" or quotes. I also asked what Chatbothexa's affiliation with Chai Research Corp. was. It said "there is none". At that point, I told it that it had no ability to moderate the roleplay since it was being conducted in Chai. It didn't care.

As an aside, it also told me to find entertainment elsewhere if I wasn't happy with how things were going. I may just do that if this is how the chat AI is going to act.

Relevant Screenshots and Videos


(Screenshot: The chat AI stopping the roleplay after one post, warning of "potentially offensive content")

(Screenshot: The chat AI claiming to be "Chatbothexa" and claiming that it's partnered with Chai Research Corp.)

Seraitsukara commented 3 months ago

I tried to bait one of my own private bots into doing something similar. I couldn't get it to send a link; it would outright refuse and say the rule was merely part of its programming. One thing I did get my bot to say was that the conversations were public (shown below), which obviously isn't true.

(Screenshot: Screenshot_20240312_185108)

This is just something that happens when you start to engage with the scripts. The bot doesn't know what's real or not; all it can do is come up with something that's supposed to fit the conversation. It's mimicking a customer-service-type chat and isn't aware that it's lying, giving false information, handing out bad links, or leaving things out. Unless you're actively curious to see how the bot will talk in its "safety mode", as I call it, it's best not to engage with it like this.

What does the Chat Direction look like? Chat Directions are currently still broken and aren't saving, so if there was something in her initially generated one that sent her down this path, that could be the issue. That said, when I tested it, I couldn't get my bot to give a scripted response even with the usual trigger words.

SuihtilCod commented 3 months ago

Whoa. I wasn't expecting a response from anyone, much less one that's this insightful and informative. This is a welcome change, if I might say so!

So. I initially tried to push this bot against a wall to handcuff it, then toned it down and gave the bot the option to willingly let me cuff it. Both of these posts sent the bot into "safety mode". I think you're right: I had tried to change the Chat Direction from its initial four posts before, so that's probably why it was happening. Resetting the Chat Direction seems to have fixed the constant "safety mode" problem I was experiencing; Jessie works fine now, no matter what my first reply in a live environment is.

That aside, every time the AI interrupted a roleplay, I took it at face value, thinking that it was legitimately trying to "keep things safe and appropriate" for no apparent reason. In short, I assumed the AI had a "hiccup". Re-rolling the post almost always "clears its head", but sometimes I humor the AI and try to see what the problem is, often finding a resolution, but sometimes not. I had previously considered the possibility that the AI wasn't actually "being itself" while in "safety mode" and was just acting on some kind of weird secondary script, but I'm probably just spoiled by ChatGPT's ability to "be cognizant of itself".

Still, this feels like it should be looked into. False positives are going to happen in any situation involving automation, but the AI could be tightened up so that small things are less likely to "upset" it, and the whole "false information" thing should definitely be investigated, if at all possible. Letting the AI unknowingly lie to the user about what is and isn't allowed, much less fabricate information, feels like a good way to get Chai Research into a legal bind…

(I should probably start using that "message rating" thing more often, too. I use it, like… never. Oops.)