1) OpenAI models (and probably other non-llama.cpp models) fail in the extract step, since the response format contains a `schema` key and the system prompt is accidentally formatted as a tuple.
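For reference, a minimal sketch of the kind of sanitization the fix applies before calling the model; the helper name and exact key layout here are hypothetical, not the actual implementation:

```python
def sanitize_extract_request(response_format: dict, system_prompt):
    """Drop the extra "schema" key and un-tuple the system prompt."""
    # OpenAI-style backends reject unexpected keys such as "schema"
    # inside response_format, so strip it before sending the request.
    response_format = {k: v for k, v in response_format.items() if k != "schema"}
    # A stray trailing comma can silently turn a prompt string into a
    # one-element tuple; join it back into a plain string.
    if isinstance(system_prompt, tuple):
        system_prompt = "".join(system_prompt)
    return response_format, system_prompt
```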
2) If you have already created a query adapter and then try to update it again (e.g. after generating more evals), SQLAlchemy throws an error, because the same key is added to the `metadata_` dict a second time. The fix updates the dict in place instead and calls `flag_modified` to make sure SQLAlchemy picks up the change when committing.
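To illustrate why `flag_modified` is needed: SQLAlchemy does not track in-place mutations of a plain `JSON` column, so a commit after mutating the dict would silently skip the UPDATE. A minimal, self-contained sketch (the table and column names are hypothetical stand-ins for the real model):

```python
from sqlalchemy import JSON, Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.attributes import flag_modified

Base = declarative_base()

class QueryAdapter(Base):  # hypothetical stand-in for the real model
    __tablename__ = "query_adapter"
    id = Column(Integer, primary_key=True)
    metadata_ = Column("metadata", JSON, default=dict)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(QueryAdapter(id=1, metadata_={"query_adapter": [0.1]}))
    session.commit()

with Session(engine) as session:
    row = session.get(QueryAdapter, 1)
    # In-place mutation of a plain JSON dict is invisible to SQLAlchemy's
    # change tracking, so commit() would silently skip the UPDATE ...
    row.metadata_["query_adapter"] = [0.2]
    # ... unless the attribute is explicitly marked as dirty:
    flag_modified(row, "metadata_")
    session.commit()

with Session(engine) as session:
    reloaded = session.get(QueryAdapter, 1).metadata_["query_adapter"]
```

An alternative would be wrapping the column type in `MutableDict.as_mutable(JSON)` so mutations are tracked automatically, but `flag_modified` is the smaller, localized fix.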
Two small bugs I noted, which this PR should fix.