Open uvnikgupta opened 3 months ago
@uvnikgupta: what LLM are you using? can you try to also instruct the LLM to wrap the value in double quotes?
I am using llama3 and mixtral. I can try to "request" the LLM to add the double quotes, but I am concerned that it may not always work. I feel the better resolution would be to remove the `literal_eval`. Is there any particular reason to keep it that way?
Asking the LLM to `# Always enclose your response within ""` works for llama3 and GPT 3.5, but it is hit or miss for mixtral.
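For context on why the quoting matters: `ast.literal_eval` only accepts valid Python literals, so a quoted string parses fine while a bare one raises. A minimal illustration (the example values are hypothetical):

```python
from ast import literal_eval

# Numbers and quoted strings are valid Python literals:
literal_eval("42")        # -> 42
literal_eval('"John"')    # -> 'John'

# A bare, unquoted string is not a literal and raises an error:
literal_eval("John")      # ValueError: malformed node or string
```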
The reason for keeping it is that, this way, the LLM can produce any primitive Python expression. It's useful to be able to allow the LLM to produce a list of strings, for example, or even a tuple. But it might make sense to enable a "fallback to string" if the `literal_eval` fails.
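A minimal sketch of what such a fallback could look like (the wrapper name `parse_generated_value` is hypothetical, not the library's actual code; only `literal_eval` comes from the thread):

```python
from ast import literal_eval

def parse_generated_value(value: str):
    """Try to parse the LLM output as a Python literal (number, string,
    list, tuple, ...); fall back to the raw string if parsing fails."""
    try:
        return literal_eval(value)
    except (ValueError, SyntaxError):
        # The LLM produced something like: John  (no quotes),
        # so treat it as a plain string instead of raising.
        return value
```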
The return statement of `generate_value()` is `return literal_eval(value)`, which raises an exception if the value is anything other than a number. Modifying it to `return value` resolves the issue. Is this expected behavior? If yes, can someone please tell me how to extract string values from the user utterances. I am using the following colang flow:
and my prompt content is: