Closed: sushmanthreddy closed this issue 12 months ago
@mmirman I would like to work on this issue.
@VictorOdede @abhigya-sodani Why isn't there finetuning support for OAI here? I recall when I first made the optimizer endpoint there was finetuning for OAI
Last I remember, there was finetuning for OpenAI too.
@mmirman @abhigya-sodani In the current version of the anarchy codebase, that code has been commented out and an exception is raised there instead. So there is no fine-tuning support for it at the moment.
@sushmanthreddy https://github.com/anarchy-ai/LLM-VM/blob/main/src/llm_vm/completion/optimize.py#L27 There is finetuning support for OAI just not in this file. I'm asking @VictorOdede and @cartazio why not in this file
Not sure why it's not in this file but will look into it asap
@VictorOdede have you got any update on this? If any changes are needed, please let me know.
@mmirman According to my understanding, fine-tuning for GPT-3.5 wasn't available at the time this file was written. OpenAI only added support for it a few weeks ago.
Will have a look at the PR soon @sushmanthreddy
Also, this issue is already being addressed by #132
Duplicate issue
In the `finetune` method of the code, it appears that fine-tuning is not currently supported for the OpenAI model. The method raises an exception indicating this limitation. However, there is commented-out code that suggests there might have been an attempt, or a plan, to support fine-tuning in the future.
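For anyone picking this up, the pattern being described is roughly the following, a minimal sketch with illustrative names (not the actual LLM-VM code, which lives in `src/llm_vm/completion/optimize.py`):

```python
class OpenAIOptimizer:
    """Illustrative stand-in for the optimizer class discussed above."""

    def finetune(self, dataset):
        # Current behavior per this thread: the OpenAI fine-tuning path is
        # commented out, and the method raises to signal the limitation.
        raise NotImplementedError(
            "Fine-tuning is not supported for the OpenAI model."
        )
        # The commented-out fine-tuning call would have gone here.
```

Restoring support would mean replacing the `raise` with a call into OpenAI's fine-tuning API, now that GPT-3.5 fine-tuning is available.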