norahsakal / fine-tune-gpt3-model

How you can fine-tune a GPT-3 model with Python using your own data

last piece of the code is not working #2

Open sanaakaddoura1 opened 1 year ago

sanaakaddoura1 commented 1 year ago

Hello, thank you for this detailed tutorial. Your further support with my issue is appreciated.

I am not having any problems except with the last piece of code, where I try to query the model. I am getting this error:

InvalidRequestError: Must provide an 'engine' or 'model' parameter to create a <class 'openai.api_resources.completion.Completion'>

The error appears in this part of the code:

```python
answer = openai.Completion.create(
    model=fine_tuned_model,
    prompt=new_prompt,
    max_tokens=10,  # Change amount of tokens for longer completion
    temperature=0
)
answer['choices'][0]['text']
```
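For reference, this error seems to be what the legacy openai<1.0 SDK raises when the model argument ends up being None. A minimal sketch of a guard that makes that failure mode explicit, assuming the same fine_tuned_model and new_prompt variables from the tutorial:

```python
# Minimal sketch (legacy openai<1.0 SDK): the "Must provide an 'engine' or
# 'model' parameter" error typically means model was None, so fail early
# with a clearer message instead of calling the API.
if fine_tuned_model is None:
    raise ValueError(
        "fine_tuned_model is still None - the fine-tune job probably "
        "hasn't finished yet"
    )

answer = openai.Completion.create(
    model=fine_tuned_model,
    prompt=new_prompt,   # same prompt variable as in the tutorial
    max_tokens=10,
    temperature=0,
)
print(answer['choices'][0]['text'])
```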

Hint: This part of the code is printing "none":

```python
if fine_tune_response.fine_tuned_model == None:
    fine_tune_list = openai.FineTune.list()
    fine_tuned_model = fine_tune_list['data'][0].fine_tuned_model
    print("none")
```
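For completeness, the fallback above assumes the first entry of the list is the right job, and its fine_tuned_model will also be None while training is still running. A sketch of a more defensive fallback, assuming the legacy openai<1.0 SDK and that the list entries expose created_at:

```python
import openai

# Minimal sketch (legacy openai<1.0 SDK): instead of taking
# fine_tune_list['data'][0] blindly, pick the most recently created job
# that actually has a fine_tuned_model set.
fine_tune_list = openai.FineTune.list()

finished_jobs = [
    job for job in fine_tune_list['data']
    if job.get('fine_tuned_model') is not None
]

if finished_jobs:
    latest_job = max(finished_jobs, key=lambda job: job['created_at'])
    fine_tuned_model = latest_job['fine_tuned_model']
else:
    fine_tuned_model = None  # no finished job yet - training is likely still running
```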

norahsakal commented 1 year ago

Hi!

Thanks for getting in touch.

One thing I've seen, and which I added to the blog post, is how to troubleshoot when your fine_tuned_model variable is null/None, which yours seems to be.

Have you tried calling this function with your fine_tune_response.id variable: retrieve_response = openai.FineTune.retrieve(fine_tune_response.id)?

Like this: [screenshot: Fine-tuning debug]
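In code, that step is roughly the following (a minimal sketch, assuming the legacy openai<1.0 SDK and that fine_tune_response is the object returned by openai.FineTune.create):

```python
# Re-fetch the fine-tune job by id and read its status and model name.
retrieve_response = openai.FineTune.retrieve(fine_tune_response.id)

print(retrieve_response.status)            # e.g. "pending", "running", "succeeded"
print(retrieve_response.fine_tuned_model)  # stays None until the job has succeeded

if retrieve_response.fine_tuned_model is not None:
    fine_tuned_model = retrieve_response.fine_tuned_model
```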

You can read more about it in section 7, "Save fine-tuned model", of the blog post: https://norahsakal.com/blog/fine-tune-gpt3-model

I hope this helps. If not, feel free to reach out again; I'm happy to help you get your model up and running.

Let me know how it goes!