Shad0wSeven opened this issue 3 months ago
You can try reducing the batch size.
In optimizer.py, the prompt_update_parameter built at line 177 accumulates long feedback regardless of whether the input prompts are long or short. Even though the combined system prompt + prompt_update_parameter token length is below the gpt-4 max token limit, this error still comes up. However, reducing the batch size to 1 or 2 works. Tested with evaluator engine = gpt-4o and gpt-4.
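For reference, here is a minimal sketch of what dropping the batch size looks like in a standard TextGrad prompt-optimization loop. This is not code from this repo: `chunks`, `train_set`, and `eval_fn` are placeholders, and the loop simply follows the pattern from the library's tutorials where per-example losses are aggregated with `tg.sum` before `backward()`.

```python
import textgrad as tg

engine = tg.get_engine("gpt-4o")
tg.set_backward_engine(engine, override=True)

# Prompt being optimized; the only parameter handed to the optimizer.
system_prompt = tg.Variable(
    "You are a concise assistant. Answer the question.",
    requires_grad=True,
    role_description="system prompt to the language model",
)

model = tg.BlackboxLLM(engine, system_prompt=system_prompt)
optimizer = tg.TGD(parameters=[system_prompt])

BATCH_SIZE = 1  # keeping this at 1-2 keeps the aggregated feedback short

for batch in chunks(train_set, BATCH_SIZE):  # chunks() is a placeholder batching helper
    losses = []
    for question, answer in batch:
        x = tg.Variable(question, requires_grad=False, role_description="query")
        prediction = model(x)
        losses.append(eval_fn(prediction, answer))  # eval_fn: your loss, e.g. built on tg.TextLoss

    # The feedback passed to prompt_update_parameter grows with the number of
    # losses summed here, which is why smaller batches avoid the length error.
    total_loss = tg.sum(losses)
    total_loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```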
I am running into the same issue with both FormattedLLMCall and BlackboxLLM. I am using very large context variables (~20,000 tokens). Reducing the context gets the optimizer to run, but doing so makes my loss very high. I hope there's a way around this; I'm looking to optimize only the prompt, not the context.
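In case it helps, this is roughly what I mean by "optimize only the prompt" (a sketch with made-up names, not code from this repo): the large context is a frozen variable with `requires_grad=False`, and only the prompt is passed to the optimizer. Note that even then, the long context still appears in the forward conversation that the backward engine sees, so this alone may not avoid the context-length error.

```python
import textgrad as tg

engine = tg.get_engine("gpt-4o")
tg.set_backward_engine(engine, override=True)

# Large context is frozen: no feedback is requested for it.
context = tg.Variable(
    long_document_text,  # placeholder for the ~20k-token context string
    requires_grad=False,
    role_description="reference context (not optimized)",
)

# Only the prompt is a trainable parameter.
prompt = tg.Variable(
    "Answer the question using the provided context.",
    requires_grad=True,
    role_description="system prompt (optimized)",
)

model = tg.BlackboxLLM(engine, system_prompt=prompt)
optimizer = tg.TGD(parameters=[prompt])

query = tg.Variable(
    f"Context:\n{context.value}\n\nQuestion: {user_question}",  # user_question is a placeholder
    requires_grad=False,
    role_description="query with inlined context",
)

response = model(query)
loss = eval_fn(response)  # placeholder loss, e.g. tg.TextLoss("Grade the answer ...")(response)
loss.backward()
optimizer.step()
```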
I keep getting this error whenever I use longer prompts or different types of data during prompt optimization. Is there any fix? Running GPT-4o.