Closed yaoice closed 2 months ago
To resolve a wrong type error in the Model provider's LLM mode, ensure that the `mode` field in the `model_properties` object of the `AIModelEntity` is set to one of the valid `LLMMode` values. The valid values for `LLMMode` are:

- `complete` (text completion)
- `chat` (dialogue)

Here is an example of how to correctly set the `mode` field:
```python
from pydantic import BaseModel

class AIModelEntity(BaseModel):
    model: str
    model_properties: dict

# Example of correct usage
model_entity = AIModelEntity(
    model="gpt-3.5-turbo",
    model_properties={
        "mode": "chat",  # Ensure this is either "complete" or "chat"
        "context_size": 2048,
    },
)
```
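Because `model_properties` is typed as a plain `dict` in the example above, an invalid `mode` value would slip through silently. A minimal sketch of how Pydantic can reject invalid values at construction time — note that the `ModelProperties` sub-model here is a hypothetical illustration, not Dify's actual class:

```python
from typing import Literal

from pydantic import BaseModel, ValidationError

class ModelProperties(BaseModel):
    # Hypothetical sub-model: restricts mode to the two valid LLMMode values
    mode: Literal["complete", "chat"]
    context_size: int

class AIModelEntity(BaseModel):
    model: str
    model_properties: ModelProperties

# Valid: "chat" is an accepted LLMMode value
entity = AIModelEntity(
    model="gpt-3.5-turbo",
    model_properties={"mode": "chat", "context_size": 2048},
)

# Invalid: "completion" is not a valid LLMMode value, so Pydantic raises
try:
    AIModelEntity(
        model="gpt-3.5-turbo",
        model_properties={"mode": "completion", "context_size": 2048},
    )
except ValidationError:
    print("invalid mode rejected")
```

With this shape, a wrong type error surfaces immediately as a `ValidationError` naming the offending field, rather than failing later inside the provider.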
Additionally, ensure that `model_type` is correctly set to `llm` in the YAML configuration file:
```yaml
model: claude-2.1  # Model identifier
label:
  en_US: claude-2.1
model_type: llm  # Model type; claude-2.1 is an LLM
model_properties:  # Model properties
  mode: chat  # LLM mode: complete for text completion models, chat for dialogue models
  context_size: 200000  # Maximum context size supported
```
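A YAML file like the one above can also be checked programmatically before use. This is a minimal sketch, assuming PyYAML is installed; the validation logic is illustrative, not Dify's own loader:

```python
import yaml  # PyYAML; an assumed dependency for this sketch

VALID_MODES = {"complete", "chat"}

# Inline copy of the configuration shown above
config_text = """
model: claude-2.1
label:
  en_US: claude-2.1
model_type: llm
model_properties:
  mode: chat
  context_size: 200000
"""

config = yaml.safe_load(config_text)

# Check the two fields this answer is about
assert config["model_type"] == "llm", "model_type must be 'llm' for LLM models"
assert config["model_properties"]["mode"] in VALID_MODES, \
    "mode must be 'complete' or 'chat'"
print("config ok")
```

Running a check like this against each provider YAML catches a misspelled `mode` or `model_type` before the model is ever invoked.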
By ensuring both the `mode` and `model_type` fields are correctly set, you can avoid type errors in the Model provider's LLM mode [1][2][3].