I renamed the text field to discourage the model from simply quoting the input text, which it had been tending to do. Untested!
I'm hoping the lower temperature for JSON generation will reduce the chance of generation failures. Even better would be to use the OpenAI function-calling mechanism: we'd force the model to call a function with the desired schema, so it generates JSON in the right format by construction.
TODO
[ ] add tests!
[ ] Switch to a JSON schema plus an example, and force a call to a fictitious show_reflections (or similar) function.
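A minimal sketch of what the function-calling approach might look like. The `show_reflections` name, the `reflections` field, and the model name are placeholders, and the payload uses the legacy `functions`/`function_call` request shape; this is an illustration of the idea, not the actual implementation.

```python
import json

# Hypothetical schema for a show_reflections function. Forcing the model to
# "call" this function makes it emit JSON matching the schema by construction.
SHOW_REFLECTIONS_FN = {
    "name": "show_reflections",
    "description": "Record the reflections generated for the input.",
    "parameters": {
        "type": "object",
        "properties": {
            "reflections": {
                "type": "array",
                "items": {"type": "string"},
                "description": "One string per reflection.",
            }
        },
        "required": ["reflections"],
    },
}


def build_request(prompt: str) -> dict:
    # Sketch of a chat-completions payload: listing the function and pinning
    # function_call to its name forces the model to produce matching arguments.
    return {
        "model": "gpt-4",  # placeholder model name
        "temperature": 0.2,
        "messages": [{"role": "user", "content": prompt}],
        "functions": [SHOW_REFLECTIONS_FN],
        "function_call": {"name": "show_reflections"},
    }


def parse_reflections(response_message: dict) -> list[str]:
    # The returned "arguments" field is a JSON string conforming to the schema.
    args = json.loads(response_message["function_call"]["arguments"])
    return args["reflections"]
```

Newer OpenAI SDK versions express the same idea with `tools` and `tool_choice` instead of `functions` and `function_call`, but the structure of the request is analogous.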