Closed — jovasque156 closed this issue 11 months ago
@jovasque156 can you try setting `model="ft:gpt-3.5-turbo:my-org:custom_suffix:id"`?
litellm expects fine-tuned gpt-3.5 models to be passed in that format: https://docs.litellm.ai/docs/tutorials/finetuned_chat_gpt
I tried, but I still get the error.
My fine-tuned model is based on gpt-3.5-turbo-0613, so its id carries an extra `-0613`, like this: `ft:gpt-3.5-turbo-0613:my-org:custom_suffix:id`. That's the only difference from the documented format.
Could that be the issue?
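A quick way to see whether the `-0613` suffix falls outside the documented id shape is to check it against a pattern. The regex below is only an illustration (it is not litellm's actual validation logic); it accepts `ft:<base-model>:<org>:<suffix>:<id>` ids with or without a version suffix on the base model:

```python
import re

# Illustrative pattern for OpenAI fine-tuned model ids of the form
# ft:<base-model>:<org>:<custom-suffix>:<id>; the base model may carry
# a version suffix such as -0613. NOT litellm's own check.
FT_ID = re.compile(r"^ft:gpt-3\.5-turbo(?:-\d{4})?:[^:]+:[^:]*:[^:]+$")

print(bool(FT_ID.match("ft:gpt-3.5-turbo:my-org:custom_suffix:id")))       # True
print(bool(FT_ID.match("ft:gpt-3.5-turbo-0613:my-org:custom_suffix:id")))  # True
```

If a library instead tests the raw string against the exact prefix `ft:gpt-3.5-turbo:`, the versioned id would fail that check even though it is a valid OpenAI fine-tuned model id.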
I’m going to close this one as stale for now, but feel encouraged to reopen it if there is more to discuss or the issue is still present in the latest version.
Describe the bug
I attempted to use a model fine-tuned from gpt-3.5-turbo that is associated with my OpenAI API key. After setting `interpreter.model = 'id_of_my_fine-tuned_model'`, Open Interpreter raises a ValueError when it tries to make a regular OpenAI call.
Below is the complete error message; I have replaced the id of my fine-tuned model with the placeholder `id_fine-tuned_model`. I'm not certain whether this project supports fine-tuned OpenAI models. If it doesn't, please consider relabelling this issue as an enhancement and adding the feature in the future.
Thank you so much for this project! It's amazing!
Reproduce
Set model:
If you are fine-tuning gpt-3.5-turbo, the id should look something like this: 'ft:gpt-3.5-turbo-0613:org:sufx:1aa1a1AAA'
Call chat method with an instruction:
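The two steps above might look like the sketch below. It assumes Open Interpreter's module-level API (the report uses `interpreter.model` on version 0.1.4) and a placeholder fine-tuned model id; running it requires the `open-interpreter` package to be installed:

```python
import interpreter  # Open Interpreter exposes a module-level instance

# Placeholder id; substitute the id of your own fine-tuned model
interpreter.model = "ft:gpt-3.5-turbo-0613:org:sufx:1aa1a1AAA"

# Any instruction triggers the OpenAI call that raises the ValueError
interpreter.chat("Plot the first 10 Fibonacci numbers.")
```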
Expected behavior
Similar to the example in your Colab, but with the tone of my fine-tuned model.
Screenshots
No response
Open Interpreter version
0.1.4
Python version
3.10.13
Operating System name and version
macOS Ventura 13.5.1
Additional context
No response