neulab / prompt2model

prompt2model - Generate Deployable Models from Natural Language Instructions
Apache License 2.0

Inconsistent behaviour in prompt parser #334

Closed saum7800 closed 1 year ago

saum7800 commented 1 year ago

In the PromptParser, if we reach max_api_calls because of API errors, we raise a ValueError, but if we reach max_api_calls because of unsuccessful extraction, we log a warning and return None. We should have consistent behaviour when we can't parse a prompt (I think the right response is to raise a ValueError in both cases).
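A minimal sketch of the two failure paths being discussed, with both collapsed into a single raise. The function name, signature, and loop structure here are illustrative assumptions, not the actual prompt2model implementation:

```python
import logging

logger = logging.getLogger(__name__)


def parse_from_prompt(prompt: str, max_api_calls: int = 3) -> dict:
    """Toy parser: every attempt fails, exhausting the call budget."""
    api_call_counter = 0
    while api_call_counter < max_api_calls:
        api_call_counter += 1
        # In the real parser this would call the API and try to extract
        # the instruction/demonstration fields from the response.
        extraction_succeeded = False  # simulate repeated failure
        if extraction_succeeded:
            return {"instruction": prompt}
    # Consistent behaviour: raise in BOTH the API-error case and the
    # failed-extraction case, rather than returning None for one of them.
    raise RuntimeError(f"Parsing failed after {max_api_calls} API calls.")
```

Callers then only need one error-handling path (`try`/`except`) instead of also checking for a None return.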

Should be a very small change, once we decide what the behaviour should be. Any thoughts on what the behaviour should be @neubig / @viswavi ?

neubig commented 1 year ago

Hi @saum7800, I think that in general we should raise exceptions when something breaks, so raising a ValueError (or actually, maybe RuntimeError is better) seems preferable to me.