microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

[Issue]: Why does the agent give the same reply for the same prompt with temperature 0.9? #2377

Closed · harrywang closed this issue 5 months ago

harrywang commented 5 months ago

Describe the issue

AutoGen novice here.

I have the following simple code, but every time I run it, the joke it returns is always the same.

This is not right - any idea why this is happening? Thanks!

import os
from dotenv import load_dotenv

load_dotenv()  # take environment variables from .env.

from autogen import ConversableAgent

llm_config = {
    "config_list": [{
        "model": "gpt-4-turbo",
        "temperature": 0.9,
        "api_key": os.environ.get("OPENAI_API_KEY"),
    }]
}

agent = ConversableAgent(
    "chatbot",
    llm_config=llm_config,
    code_execution_config=False,  # Turn off code execution, by default it is off.
    function_map=None,  # No registered functions, by default it is None.
    human_input_mode="NEVER",  # Never ask for human input.
)

reply = agent.generate_reply(messages=[{"content": "Tell me a joke", "role": "user"}])
print(reply)

The reply is always the following:

Why don't skeletons fight each other? They don't have the guts.

SaturnCassini commented 5 months ago

Try changing the cache seed, for example: https://github.com/SaturnSeries/saturn.chat/blob/15c83225cbb3f3f8c9d22b6552036ef001da734a/simple_universe.py#L185
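A minimal sketch of that suggestion, assuming cache_seed can sit at the top level of llm_config next to config_list (as the AutoGen LLM configuration docs describe), with a fresh integer each run so the call is not answered from the cache:

import os
import random

from autogen import ConversableAgent

# A different cache_seed per run gives a different cache key, so AutoGen
# sends a fresh request instead of replaying the cached response.
llm_config = {
    "config_list": [{"model": "gpt-4-turbo", "api_key": os.environ.get("OPENAI_API_KEY")}],
    "temperature": 0.9,
    "cache_seed": random.randint(0, 2**31 - 1),
}

agent = ConversableAgent("chatbot", llm_config=llm_config, human_input_mode="NEVER")
reply = agent.generate_reply(messages=[{"content": "Tell me a joke", "role": "user"}])
print(reply)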

harrywang commented 5 months ago

@SaturnCassini Thanks a lot.

I changed the seed and ran it 10 times - most of the replies are still the same:

import os
import random
from dotenv import load_dotenv

load_dotenv()  # take environment variables from .env.
from autogen import ConversableAgent

# run multiple times
for i in range(10):
    cache_seed = random.randint(0, 9999999999999999)
    print(cache_seed)  # print the same seed that is passed to llm_config below
    llm_config = {"config_list": [{
        "model": "gpt-4-turbo",
        "cache_seed": cache_seed,
        "temperature": 0.9,
        "api_key": os.environ.get("OPENAI_API_KEY")}]}

    agent = ConversableAgent(
        "chatbot",
        llm_config=llm_config,
        code_execution_config=False,  # Turn off code execution, by default it is off.
        function_map=None,  # No registered functions, by default it is None.
        human_input_mode="NEVER",  # Never ask for human input.
    )

    reply = agent.generate_reply(messages=[{"content": "Tell me a joke", "role": "user"}])
    print(reply)

output:

3212715554126343
Why don't skeletons fight each other? They don't have the guts.
5511361628170675
Why don't skeletons fight each other? They don't have the guts.
7438643446558568
Why don’t skeletons fight each other? They don't have the guts.
9323353449419361
Why don't skeletons fight each other? They don't have the guts.
8011209970030825
Why don't skeletons fight each other? They don't have the guts.
4219964513952237
Sure, here's one for you:

Why don't skeletons fight each other?

They don't have the guts!
8096302230799109
Why don't skeletons fight each other? They don't have the guts.
6606078897329995
Why don't skeletons fight each other? They don't have the guts.
4099472369462176
Why don't skeletons fight each other?

They don't have the guts.
7306887790716329
Why don't skeletons fight each other? They don't have the guts.

harrywang commented 5 months ago

I also found this in the official docs (https://microsoft.github.io/autogen/docs/topics/llm_configuration):

[Screenshot from the linked docs page, 2024-04-13]
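A minimal sketch of disabling the cache altogether, assuming setting cache_seed to None at the top level of llm_config turns response caching off (as that docs page describes), so temperature 0.9 can actually produce different replies:

import os
from autogen import ConversableAgent

llm_config = {
    "config_list": [{"model": "gpt-4-turbo", "api_key": os.environ.get("OPENAI_API_KEY")}],
    "temperature": 0.9,
    "cache_seed": None,  # None disables AutoGen's response cache; an integer makes runs reproducible
}

agent = ConversableAgent("chatbot", llm_config=llm_config, human_input_mode="NEVER")
print(agent.generate_reply(messages=[{"content": "Tell me a joke", "role": "user"}]))
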
harrywang commented 5 months ago

There is a .cache folder, which can be deleted - but that does not help much either.
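To separate cache effects from the model's own behaviour, a minimal sanity check is to call the model directly with the plain openai client (assuming openai>=1.0) and see how much the joke varies at temperature 0.9 with no caching layer at all:

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# Five direct calls with no cache in between; any repetition here comes from the model itself.
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": "Tell me a joke"}],
        temperature=0.9,
    )
    print(response.choices[0].message.content)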