hkust-zhiyao / RTL-Coder

A new LLM solution for RTL code generation, achieving state-of-the-art performance among non-commercial solutions and outperforming GPT-3.5.

openai version and api_key problem #3

Closed MarkJiang-maji closed 6 months ago

MarkJiang-maji commented 10 months ago

Because openai has been upgraded to v1.9, I have updated the openai-related askGPT35 function (inside utils.py) to the newest API, but I still get an error when running instruction_gen.py. Is this an API key problem? How can I fix this?

My askGPT35 function:

```python
def askGPT35(question, model='text-davinci-003', is_response=False, temperature=0.7):
    sleep_time = 2
    api_key = os.environ.get("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")  # make sure your API key is already set in the environment variables here
    if is_response is True:
        p_message = [
            {'role': 'system', 'content': 'I want you act as a Professional Verilog coder.'},
            {'role': 'user', 'content': question}
        ]
    else:
        p_message = [
            {'role': 'system', 'content': ''},
            {'role': 'user', 'content': question}
        ]
    max_gen_tokens = 2048
    count = 0
    while True:
        if count == 5:
            dic = {'finish_reason': 'length', 'text': ''}
            return [dic]
        try:
            client = openai.OpenAI(api_key=api_key)  # initialize the OpenAI client object with the API key
            response = client.chat.completions.create(
                model=model,
                messages=p_message,
                temperature=temperature,
                max_tokens=max_gen_tokens,
            )
            print(response)
            ans = response['choices'][0]['message']['content']
            dic = {'text': ans, 'finish_reason': response['choices'][0]['finish_reason']}
            break
        except Exception as e:
            if 'maximum context' in str(e):
                count += 1
                max_gen_tokens = int(max_gen_tokens / 1.3)
            logging.warning(f"OpenAIError: {e}.")
            logging.warning("Hit request rate limit; retrying...")
            time.sleep(sleep_time)
    return [dic]
```

Terminal:

```
Loaded 10 seed instructions
  0%|          | 0/50 [00:00<?, ?it/s]
WARNING:root:OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable.
WARNING:root:Hit request rate limit; retrying...
```
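(For reference, a likely cause of the first warning: `os.environ.get()` expects the *name* of an environment variable, so passing the raw key string returns `None`, and the client is then constructed with `api_key=None`. A minimal sketch of the usual lookup pattern, assuming the key has been exported under the name `OPENAI_API_KEY`, the variable the error message refers to:)

```python
import os
import openai

# os.environ.get() looks up an environment variable by NAME; passing the raw
# "sk-..." key string as the name returns None unless a variable with that
# exact name exists, which leads to openai.OpenAI(api_key=None) and the
# "api_key client option must be set" error above.
api_key = os.environ.get("OPENAI_API_KEY")  # e.g. after `export OPENAI_API_KEY=sk-...`
client = openai.OpenAI(api_key=api_key)
```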

DevinShang commented 10 months ago

Hi, could you please set the openai API key in your main function like this:

```python
import openai
openai.api_key = 'your api key'
```

Then you do not need to pass the attribute "api_key=api_key" when initializing the client object.
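(One way this could look with the v1.x library, as a sketch rather than the repository's actual code, with the prompt below as a placeholder: set the key once and use the module-level interface, which picks up `openai.api_key` without constructing a separate client.)

```python
import openai

openai.api_key = "your api key"  # set once, e.g. at the top of instruction_gen.py

# Module-level calls in openai>=1.x use openai.api_key, so no explicit client
# construction or api_key argument is needed here.
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "I want you to act as a professional Verilog coder."},
        {"role": "user", "content": "Write a 2-to-1 multiplexer in Verilog."},  # placeholder prompt
    ],
    temperature=0.7,
    max_tokens=2048,
)

# v1.x responses are objects, not dicts, so fields are accessed as attributes.
print(response.choices[0].message.content)
print(response.choices[0].finish_reason)
```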

MarkJiang-maji commented 10 months ago

Thank you for the suggestion, but I'm still facing the following issue:

```
WARNING:root:OpenAIError: Invalid URL (POST /v1/engines/gpt-35-turbo/chat/completions).
WARNING:root:Hit request rate limit; retrying...
```

How can I fix this?

DevinShang commented 10 months ago

We use the OpenAI service through Azure, and there may be slight differences between the two access methods. You may need to change the model name from "gpt-35-turbo" to "gpt-3.5-turbo". In general, if you use the API from the OpenAI website directly, we recommend referring to the official documentation when modifying the generation code.
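(A minimal sketch of the difference between the two access paths; this is not the repository's code, and the Azure endpoint, deployment name, and API version below are placeholders. Azure OpenAI takes the *deployment* name, often "gpt-35-turbo", as `model`, while the official OpenAI API expects the public model name "gpt-3.5-turbo".)

```python
import openai

# Official OpenAI API: the public model name is "gpt-3.5-turbo".
oai_client = openai.OpenAI(api_key="your api key")
resp = oai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)

# Azure OpenAI: "model" is your deployment name, often "gpt-35-turbo".
# azure_endpoint and api_version are placeholders for your own deployment.
azure_client = openai.AzureOpenAI(
    api_key="your azure key",
    api_version="2023-07-01-preview",
    azure_endpoint="https://your-resource-name.openai.azure.com",
)
resp = azure_client.chat.completions.create(
    model="gpt-35-turbo",  # deployment name, not the public model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```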