Chat effortlessly, execute commands, and interpret code with Llama3, Phi3, and more - your local AI assistant. Enjoy seamless interaction while ensuring ultimate privacy.
Current behaviour :
Some systems may not be able to load the model quickly, or may fail to load it at all. Because these models are hardware intensive, answers to queries can arrive very late and may also be inaccurate.
Solution :
Integrate an OpenAI model via the OpenAI API, so that if the local LLM fails to load or returns incorrect/inaccurate answers, the user can still retrieve the answer using OpenAI.
Tasks to be done :
[ ] OpenAI integration to CLI mode
[ ] OpenAI integration to UI mode
Expected behaviour :
Once OpenAI is integrated, the user should be able to use OpenAI via python main.py --model_type --openai in CLI mode, and by selecting the OpenAI model in UI mode.
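A minimal sketch of how the CLI fallback could look, assuming a hypothetical `load_local_model` loader and an `ask_openai` helper wrapping the OpenAI API (both names are illustrative, not from the repo; the OpenAI call is stubbed so the sketch runs offline):

```python
import argparse

def load_local_model(name):
    # Hypothetical local loader; returns None when the model cannot
    # be loaded on this hardware (stubbed here for illustration).
    return None

def ask_openai(query):
    # Placeholder for a call to the OpenAI API via the official
    # `openai` client; stubbed so the sketch runs without a key.
    return f"[openai] answer to: {query}"

def answer(query, model_type="llama3"):
    """Try the local model first; fall back to OpenAI on failure."""
    if model_type == "openai":
        return ask_openai(query)
    model = load_local_model(model_type)
    if model is None:
        # Local load failed (slow or underpowered hardware): fall back.
        return ask_openai(query)
    return model.generate(query)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_type", default="llama3")
    args = parser.parse_args()
    print(answer("hello", args.model_type))
```

The same `answer` function could back the UI mode's model dropdown, with "OpenAI" mapping to `model_type="openai"`.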