chtmp223 / topicGPT

Code & Prompts for TopicGPT: A Prompt-Based Framework for Topic Modeling

Alternative models in OpenAI #6

Open nateray42 opened 1 month ago

nateray42 commented 1 month ago

Hi, after using the module, I realized that your code only accepts specific OpenAI models, namely GPT-3.5 and GPT-4 (not the Turbo variants).

Since there are multiple other models, and potentially future models with different names (for example, OpenAI just announced GPT-4o), could you update the code to accept those models as well?

kevin-weitgenant commented 1 month ago

You can edit the `api_call` function in `utils.py`.

Removing the check `if deployment_name in ["gpt-35-turbo", "gpt-4", "gpt-3.5-turbo"]:` and relying on a try/except instead may be a good solution for what you want:

    """
    Call API (OpenAI, Azure, Perplexity) and return response
    - prompt: prompt template
    - deployment_name: name of the deployment to use (e.g. gpt-4, gpt-3.5-turbo, etc.)
    - temperature: temperature parameter
    - max_tokens: max tokens parameter
    - top_p: top p parameter
    """
    time.sleep(5)  # Change to avoid rate limit

    try:
        response = client.chat.completions.create(
            model=deployment_name,
            temperature=float(temperature),
            max_tokens=int(max_tokens),
            top_p=float(top_p),
            messages=[
                {"role": "system", "content": ""},
                {"role": "user", "content": prompt},
            ],
        )
        return response.choices[0].message.content
    except Exception as e:
        # Handle any exceptions here
        print(f"An error occurred: {e}")
        return None  # Or you can raise an exception if required
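To illustrate, here is a minimal offline sketch of that approach. The stub client and parameter defaults are hypothetical (not from the repo), and the client is passed in explicitly rather than read from a module-level variable as in `utils.py`, so the example runs without an API key. With the whitelist gone, any deployment name, including `gpt-4o`, is passed straight through to the API:

```python
from types import SimpleNamespace

class StubClient:
    """Hypothetical stand-in mimicking the OpenAI client's
    chat.completions.create call shape, for offline testing."""

    def __init__(self):
        self.chat = SimpleNamespace(
            completions=SimpleNamespace(create=self._create)
        )

    def _create(self, model, temperature, max_tokens, top_p, messages):
        # Echo the model name back so we can see it was accepted as-is.
        msg = SimpleNamespace(content=f"ok: {model}")
        return SimpleNamespace(choices=[SimpleNamespace(message=msg)])

def api_call(client, prompt, deployment_name,
             temperature=1.0, max_tokens=300, top_p=1.0):
    # Same structure as the snippet above, minus the model whitelist
    # and the rate-limit sleep; unsupported models surface as API
    # errors instead of being rejected up front.
    try:
        response = client.chat.completions.create(
            model=deployment_name,
            temperature=float(temperature),
            max_tokens=int(max_tokens),
            top_p=float(top_p),
            messages=[
                {"role": "system", "content": ""},
                {"role": "user", "content": prompt},
            ],
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f"An error occurred: {e}")
        return None

print(api_call(StubClient(), "Summarize this document.", "gpt-4o"))
```

Since the model name is only ever forwarded to `create`, newly released models work without further code changes.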
kevin-weitgenant commented 1 month ago

Actually, sorry. Several functions in the code would need to be changed to fully fix this issue.