The OpenAI completions endpoint calls will crash when the stop tokens are specified as a list. This is the same issue the chat endpoint had when the chat messages were a non-hashable type. The solution for the chat endpoint was to stringify the kwargs so that the cache could hash them; the string is then parsed back into JSON before the request is made.
This PR implements the same logic for the completions endpoints.
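A minimal sketch of the stringify-then-parse pattern described above (names are illustrative, not the actual DSPy helpers): the kwargs are serialized to a canonical JSON string so the cache can hash them, then parsed back into a dict inside the cached call.

```python
import json
from functools import lru_cache

@lru_cache(maxsize=None)
def _cached_request(kwargs_json: str):
    # Parse the canonical JSON string back into kwargs before
    # forwarding to the client (stand-in for the real API call).
    kwargs = json.loads(kwargs_json)
    return kwargs

def completion(**kwargs):
    # Serialize kwargs -- including unhashable values like
    # stop=["\n"] -- into a hashable, order-independent cache key.
    return _cached_request(json.dumps(kwargs, sort_keys=True))
```

With a plain `lru_cache` over the raw kwargs, `stop=["\n"]` would raise `TypeError: unhashable type: 'list'`; routing through the JSON string avoids that.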
There are two potential drawbacks I see here:
(1) This will invalidate the cache of anyone who still uses the completions endpoints in DSPy.
(2) The completions endpoints are no longer supported by OpenAI and are being phased out, so maybe we should instead focus on removing support from DSPy.
A potential alternative solution would be to stringify the kwargs only if they contain an unhashable type. This would be backwards compatible with old caches, but with the drawback that the check for hashable types would incur a delay on each call.
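The backwards-compatible alternative could be sketched roughly like this (the key formats here are hypothetical, not DSPy's actual cache-key layout): try the old-style hashable key first, and fall back to a JSON string only when some value is unhashable.

```python
import json

def make_cache_key(kwargs: dict):
    # Try an old-style hashable key first so existing cache entries
    # remain valid for callers whose kwargs were always hashable.
    try:
        return frozenset(kwargs.items())
    except TypeError:
        # Some value (e.g. stop=["\n"]) is unhashable; fall back to
        # a canonical JSON string, which is always hashable.
        return json.dumps(kwargs, sort_keys=True)
```

The per-call cost is the try/except probe: hashable kwargs pay one `frozenset` construction, while unhashable ones pay both the failed attempt and the JSON serialization.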