Closed JG-11 closed 1 year ago
Hello! The thing is that because the SDK also lets you ingest your own LLM models deployed on the cloud or elsewhere, the model name is the name of your own custom deployed model. That said, we'll gladly make those changes to the documentation for future cases. Thank you for your comments; they help us a lot!
Despite the existence of various explanations for the `model_name` field, specifically pertaining to ChatGPT models available on the Internet (refer to ScriptbyAI for an example), there remains a distinct lack of clarity regarding the available name options within this SDK's documentation. For instance, the `model_name` value `InnovationGPT2`, mentioned in connection with the `OPENAI_CHAT_MODEL_NAME` environment variable in the provided example, has no matching counterpart anywhere in the available documentation. Consequently, users cannot ascertain in advance how many tokens can be processed, or decide which model is best suited to their specific data source size.

As an illustration, one might choose `gpt-4-32k-0613` for processing large texts exceeding 30,000 tokens. However, without comprehensive documentation, making such decisions becomes significantly challenging.

We therefore suggest enhancing this portion of the documentation to include a detailed list of available `model_name` options along with corresponding explanations, enabling users to make informed decisions based on their specific needs and data source sizes.
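To make the request concrete, here is a minimal sketch of the kind of configuration being asked about. The `OPENAI_CHAT_MODEL_NAME` variable comes from the example discussed above; the value `gpt-4-32k-0613` is one of OpenAI's published model identifiers (chosen for its 32k-token context window), not a name documented by this SDK, and how the SDK actually consumes the variable is an assumption here:

```python
import os

# Hypothetical setup: point the SDK at a model via the environment
# variable shown in the SDK's own example. With a self-hosted model,
# this would instead be the custom name of the deployed model.
os.environ["OPENAI_CHAT_MODEL_NAME"] = "gpt-4-32k-0613"

# The SDK would presumably read the variable back like this.
model_name = os.environ["OPENAI_CHAT_MODEL_NAME"]
print(model_name)  # → gpt-4-32k-0613
```

Documenting which identifiers are valid here, and the token limits attached to each, is exactly the gap the issue describes.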