crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

Implement "Continue". Final output left truncated and unfinished because the LLM max_tokens limit was reached. #504

Closed MinhNgyuen closed 2 weeks ago

MinhNgyuen commented 4 months ago

Right now, if you give an Agent a task to write a long report along with a lot of context, it will run out of output tokens and stop midway through its response. There is no option to have the LLM continue generating the report when it runs out of output tokens.

Similar to the ChatGPT web UI, add an option to continue output generation and finalize the output, for example along the lines of the sketch below.
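
To make the request concrete, here is a rough sketch of how the option could be exposed on the agent. The continue_on_max_tokens flag is purely hypothetical and does not exist in CrewAI today; the rest is the standard Agent/Task/Crew setup.

from crewai import Agent, Task, Crew

# Hypothetical flag -- not part of the current CrewAI API.
# Idea: when the underlying LLM stops with finish_reason == "length",
# CrewAI would re-prompt it to continue and stitch the pieces together
# before returning the task output.
writer = Agent(
    role="Report Writer",
    goal="Write a long, detailed report",
    backstory="An analyst who produces exhaustive, well-structured reports",
    continue_on_max_tokens=True,  # hypothetical option requested in this issue
)

report_task = Task(
    description="Write a long report from the provided research context",
    expected_output="A complete, untruncated report",
    agent=writer,
)

crew = Crew(agents=[writer], tasks=[report_task])
result = crew.kickoff()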

Here's some example code showing how it's done with the OpenAI client:

from openai import OpenAI


def openai_write_long_response(prompt: str, context: str) -> str:
    client = OpenAI()
    messages = [
        {
            "role": "user",
            "content": f"Below is some relevant information that you can use to craft your report\n\n{context}",
        },
        {
            "role": "user",
            "content": prompt,
        },
    ]

    completion = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=messages,
    )
    output = completion.choices[0].message.content

    # Keep re-prompting while the model stopped because it hit the output token limit.
    while completion.choices[0].finish_reason == "length":
        messages += [
            {
                # Feed the truncated chunk back as the assistant's message ...
                "role": "assistant",
                "content": completion.choices[0].message.content,
            },
            {
                # ... and ask the model to pick up where it left off.
                "role": "system",
                "content": "Message was truncated. Please continue.",
            },
        ]
        completion = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=messages,
        )
        output += completion.choices[0].message.content

    return output
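
For reference, calling the helper above could look like this (the prompt and research notes are just placeholders):

research_notes = "..."  # long context gathered earlier in the crew run
report = openai_write_long_response(
    prompt="Write a detailed, multi-section report based on the notes above.",
    context=research_notes,
)
print(report)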
MinhNgyuen commented 4 months ago

If you could point to where in the code we could implement this, I can take a stab at adding this functionality.

github-actions[bot] commented 2 weeks ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] commented 2 weeks ago

This issue was closed because it has been stalled for 5 days with no activity.