Closed eclipt69 closed 1 year ago
As you observed, BlenderBot's responses are based on its perception of that personality. It is true that it often exaggerates or hallucinates. This is somewhat expected and reflects the current state of the challenge in building a sensible conversational AI agent.
Do you think I can lower its "exaggeration" by adjusting the temperature?
That sounds like a logical hypothesis to me, but I cannot say so for certain. The best way to find out is to try it.
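For what it's worth, a minimal sketch of what "messing around with the temperature" could look like with ParlAI-style generation options. The specific keys (`inference`, `topp`, `temperature`) are standard ParlAI generation opts, but the values below are illustrative assumptions, not a tested recommendation:

```python
# Hypothetical generation-opt overrides for experimenting with less
# "exaggeration". Note: temperature only has an effect with sampling-based
# decoding (e.g. 'topk' or 'nucleus'); plain beam search ignores it.
opt_overrides = {
    'inference': 'nucleus',   # switch to nucleus (top-p) sampling
    'topp': 0.9,              # nucleus cutoff (assumed value)
    'temperature': 0.7,       # values below 1.0 make sampling more conservative
}
print(opt_overrides['temperature'])
```

These overrides would be merged into the same `opt` dict that is passed to `create_agent_from_model_file` in the reproduction snippet.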
This issue has not had activity in 30 days. Please feel free to reopen if you have more issues. You may apply the "never-stale" tag to prevent this from happening.
Description
When I try to use a persona with bb2 it gives me weird outputs. When I don't use any personas it works fine.
Reproduction steps (code)
(Not the full code, of course.) I'm using these pieces of code I've found in other issues that seem to work with bb2:
# opts
opt = {
    'safe_personas_only': False,
    'memory_key': 'full_text',
    'debug': True,
    'skip_generation': False,
    'include_personas': True,
    "knowledge_access_method": "classify",
    'doc_chunk_split_mode': ' word ',
    'temperature': 5,
    'search_server': '127.0.0.1:8080',
}

# load blenderbot2
blender_agent = create_agent_from_model_file("zoo:blenderbot2/blenderbot2_400M/model", opt)

(...)

# text for the model to observe
# in this case, "your persona" is intended to be the bot's persona,
# as I don't want any user personas to load
turn = "\n".join(["your persona: My name is Jonh", "your persona: I am 20 years old"])

# make the model witness the text
blender_agent.observe({'text': turn, 'episode_done': False})
Behavior
Query: "hello there! How are you?"
The example's output is not that bad if you take only the last sentence, but depending on the input, it can be far weirder. Is there any way to fix this? Thanks 🙂