Closed: amaze18 closed this issue 5 months ago
Added "gpt-4o" as the model, sir.
Is gpt-4o working, @SaiGane5?
No sir (@amaze18). I tried the openai library; it has gpt-4o, but it didn't work because we cannot pass the temperature argument with the openai library. llama_index is working, but it doesn't have gpt-4o.
Done sir. I replaced the earlier OpenAI import with "from llama_index.llms.openai import OpenAI". This one doesn't take the temperature argument, so I removed it; it's working now.
Traceback:
File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
File "/mount/src/ledoux/streamlit_app.py", line 141, in <module>
We will clear the message history every 4 messages so that the token limit is not exceeded, or we will move this entire thing to an EC2 instance. With these two solutions I am closing this issue.
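The first option above (clearing the history every 4 messages) can be sketched as plain Python; the `trim_history` helper and the message dicts are illustrative assumptions, not the actual streamlit_app.py code:

```python
def trim_history(messages, max_messages=4):
    """Clear the chat history once it reaches max_messages entries,
    so the prompt's token count stays bounded (per the comment above)."""
    if len(messages) >= max_messages:
        return []  # the comment says to clear the history entirely
    return messages

# Illustrative usage: the history is wiped after the 4th message arrives.
history = []
for i in range(6):
    history.append({"role": "user", "content": f"message {i}"})
    history = trim_history(history)
```

In a Streamlit app the same helper would be applied to `st.session_state` rather than a local list; a sliding window (keeping the last N messages instead of clearing all of them) would preserve more context at a similar token cost.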
m = ["gpt-4-1106-preview", "gpt-4-0125-preview"]
Add llm = OpenAI(model="gpt-4o") to the used models as well; it has better performance.
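Putting the two snippets together, the model list `m` from above would gain a "gpt-4o" entry; the selection line below it is a hypothetical sketch of preferring gpt-4o when present, not code from the repository:

```python
# Model options offered by the app; "gpt-4o" appended per the suggestion above.
m = ["gpt-4-1106-preview", "gpt-4-0125-preview", "gpt-4o"]

# Hypothetical selection: prefer gpt-4o when available, otherwise fall back
# to the first listed model.
model = "gpt-4o" if "gpt-4o" in m else m[0]
```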