ju-bezdek / langchain-decorators

syntactic sugar 🍭 for langchain
MIT License

Update Default OpenAI LLM to "gpt-4-1106-preview" with Parallel Functionality #9

Closed · jvroth18 closed this 1 year ago

jvroth18 commented 1 year ago

This pull request updates the GlobalSettings configuration to use the "gpt-4-1106-preview" model from OpenAI's large language models (LLMs). This change incorporates the latest developer release from OpenAI, dated 11/6/23, which introduces support for parallel function calling. This enhancement promises to significantly improve performance by allowing concurrent operations, making our application more efficient and responsive.

ju-bezdek commented 1 year ago

Hey, thanks for the PR, but...

we can't just swap the default GPT-3.5 model for GPT-4

there would be a significant increase in costs, and keep in mind that some people also use this for simple tasks like name extraction etc.

What I thought of doing here is to swap the default GPT-4 model for the new preview version

Then you can use GPT-4 wherever you want, either by using the prompt_type argument, as shown here: Using predefined prompt types

or by choosing an llm_selector_rule (which will automatically select the GPT-4 model by estimating the context window and prediction size), as shown here: #automatic-llm-selection
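
For illustration, a per-prompt opt-in might look roughly like the sketch below. It follows the custom prompt type pattern from the linked README section; the class and argument names (PromptTypes, PromptTypeSettings, llm=) are assumptions based on that section and may differ between versions.

```python
# Rough sketch of a per-prompt GPT-4 opt-in via a prompt type, based on the
# "Using predefined prompt types" section of the README. Class and argument names
# (PromptTypes, PromptTypeSettings, llm=) are assumptions and may differ by version.
from langchain.chat_models import ChatOpenAI
from langchain_decorators import llm_prompt, PromptTypes, PromptTypeSettings


class MyPromptTypes(PromptTypes):
    # A custom prompt type bound to a GPT-4 model
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4-1106-preview"))


# Only this prompt uses GPT-4; prompts without a prompt_type keep the cheaper default
@llm_prompt(prompt_type=MyPromptTypes.GPT4)
def summarize_contract(contract_text: str) -> str:
    """
    Summarize the key obligations in the following contract:
    {contract_text}
    """
    return
```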

If you want to use GPT-4 as the default for your whole repo, that is certainly possible by defining global settings somewhere in your code, as described here: https://github.com/ju-bezdek/langchain-decorators#defining-custom-settings
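
For reference, making GPT-4 the project-wide default would look roughly like this minimal sketch, following the linked "defining custom settings" section; the default_llm and default_streaming_llm argument names are taken from that section as I recall it and should be double-checked against the README.

```python
# Minimal sketch of project-wide defaults, following the README's
# "Defining custom settings" section; argument names may differ by version.
from langchain.chat_models import ChatOpenAI
from langchain_decorators import GlobalSettings

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(model="gpt-4-1106-preview", temperature=0.0),
    default_streaming_llm=ChatOpenAI(model="gpt-4-1106-preview", temperature=0.0, streaming=True),
)
```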

ju-bezdek commented 1 year ago

OK, so the new version 0.3.0 has been published. The GPT-4 preview model is currently the default GPT-4 model,

but unless you select it by setting the prompt type or llm_selector_key="gpt-4", the latest gpt-3.5 model with the 16k window will be used
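
To illustrate the v0.3.0 behavior described above, here is a hedged sketch; the selector-key argument is referred to as llm_selector_rule / llm_selector_key in this thread, so the keyword and value used below are assumptions and should be checked against the README's automatic LLM selection section.

```python
# Sketch of the v0.3.0 defaults described above. The selector-key keyword and value
# ("llm_selector_rule_key", "gpt-4") are assumptions based on this thread; verify the
# exact spelling in the README's "Automatic LLM selection" section.
from langchain_decorators import llm_prompt


# No arguments: uses the default LLM (latest gpt-3.5 with the 16k context window)
@llm_prompt
def extract_name(text: str) -> str:
    """
    Extract the person's name from the following text: {text}
    """
    return


# Opting in to the GPT-4 selector rule, which now resolves to the GPT-4 preview model
@llm_prompt(llm_selector_rule_key="gpt-4")
def analyze_long_document(document: str) -> str:
    """
    Analyze the following document and list its main arguments:
    {document}
    """
    return
```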