zou-group / textgrad

Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients.
http://textgrad.com/
MIT License
944 stars 67 forks source link

[HELP NEEDED] How to print eval_model's opinion for improving the prompt in the Prompt Optimization Example #28

Closed IcyFeather233 closed 1 day ago

IcyFeather233 commented 1 week ago

In the "Optimizing a Code Snippet and Define a New Loss" example, I can use:

# Let's do the forward pass for the loss function.
loss = loss_fn(problem, code)
print(loss.value)

and

# Let's look at the gradients!
loss.backward()
print(code.gradients)

to see the reasoning behind each update.

But in the Prompt Optimization example, I don't know how to do the same. I want to see judgements like "Your step-by-step reasoning is clear and logical, but it contains a critical flaw in the assumption that drying time is directly proportional to the number of shirts. [...]" or "your prompt should be improved in ...".

vinid commented 1 week ago

Hello!

You can print system_prompt.gradients to see the gradients of the system prompt.

For the loss, you can instead print each of the individual losses (e.g., losses[0]).

You should take a look at the eval_output_variable if you are interested in the evaluation.
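To make the suggestion above concrete, here is a toy sketch of the inspection pattern. The classes below are hypothetical stand-ins, not the real textgrad API: they only illustrate that after a backward pass, textual "gradients" accumulate on a variable and can simply be printed.

```python
class Variable:
    """Minimal stand-in for a textgrad Variable holding textual gradients."""
    def __init__(self, value, role=""):
        self.value = value
        self.role = role
        self.gradients = []  # textual feedback accumulated by a backward pass

def mock_backward(variable, feedback):
    """Stand-in for loss.backward(): attach textual feedback to the variable."""
    variable.gradients.append(feedback)

system_prompt = Variable("Answer the question step by step.", role="system prompt")
mock_backward(system_prompt,
              "Your prompt should stress checking hidden assumptions.")

# Mirrors `print(system_prompt.gradients)` in the real library.
for g in system_prompt.gradients:
    print(g)
```

In the actual library, calling `loss.backward()` populates `system_prompt.gradients` the same way, so printing it after the backward pass shows the evaluator's feedback on the prompt.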

Note that we wrapped the process in various subclasses, so if you want to better understand what we are doing behind the scenes, take a look at how we create the eval_fn and the MultiFieldTokenParsedEvaluation.

Hope this helps :)