onuratakan / gpt-computer-assistant

gpt-4o for windows, macos and linux
MIT License

Llama3 & Gemini Pro support #111

Closed · Mideky-hub closed 3 weeks ago

Mideky-hub commented 3 weeks ago

Avoid having out-of-date packages.

onuratakan commented 3 weeks ago

Amazing.

onuratakan commented 3 weeks ago

I will separate the API key save section.

Mideky-hub commented 3 weeks ago

Sure, and thank you; you've done great work! It's nice to work on this project. I will soon review the load-API-key function so there's a standardized way to put everything together. Maybe we could have a ModelFactory to generate models easily, as indicated in another issue, so users can add the models they want more easily? That would let us keep only 3-4 generic models and let users import their own models as presets they can share with each other. What do you think about that, @onuratakan?
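A rough sketch of what such a ModelFactory could look like (the class, the register/create methods, and the preset names are illustrative only, not existing gpt-computer-assistant code):

```python
# Hypothetical sketch: map user-facing preset names to builders of LangChain models,
# so a handful of generic presets can ship with the app and users can register their own.
from langchain_openai import ChatOpenAI


class ModelFactory:
    _registry = {}

    @classmethod
    def register(cls, name, builder):
        """Register a callable that builds a LangChain model for a preset name."""
        cls._registry[name] = builder

    @classmethod
    def create(cls, name, **kwargs):
        if name not in cls._registry:
            raise ValueError(f"Unknown model preset: {name}")
        return cls._registry[name](**kwargs)


# A generic built-in preset; users could register and share additional ones.
ModelFactory.register("gpt-4o", lambda **kw: ChatOpenAI(model="gpt-4o", **kw))

# Assumes OPENAI_API_KEY is set in the environment.
llm = ModelFactory.create("gpt-4o", temperature=0)
```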

onuratakan commented 3 weeks ago

Sure, thank you to you, you got great work! It's nice to work on this project! I will soon review the load API key function to have standardized way to compile everything. Maybe having a ModelFactory to generate model easily as indicate another issue in order to let the user add the model they want more easily? This can help us by having only 3-4 generic model and letting the user import their own model to have presets that they can share with each other etc.. what do you think about that @onuratakan ?

Thanks, I think we should give users the ability to customize the LLM, like:

```python
from gpt_computer_assistant import start
from langchain_openai import OpenAI  # any LangChain-compatible LLM

your_custom_langchain_llm = OpenAI()

start(your_custom_langchain_llm)
```
onuratakan commented 3 weeks ago

Also, I think we should add the ability to click on something on the screen. I just added it to the roadmap, just for your info.
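A minimal sketch of what such a screen-click helper could look like, assuming pyautogui (illustrative only, not the project's planned implementation; the function name is made up):

```python
# Hypothetical sketch using pyautogui: find a template image on the screen and click it.
import pyautogui


def click_on_screen(image_path: str, confidence: float = 0.8) -> bool:
    """Locate `image_path` on the screen and click its center; return True on success."""
    try:
        # The `confidence` argument requires opencv-python to be installed.
        location = pyautogui.locateCenterOnScreen(image_path, confidence=confidence)
    except pyautogui.ImageNotFoundException:
        return False
    if location is None:  # older pyautogui versions return None instead of raising
        return False
    pyautogui.click(location.x, location.y)
    return True
```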

Mideky-hub commented 3 weeks ago


Passing a custom LLM to start() would only work if the LLM is LangChain-compatible; otherwise it will just throw an error. Is there an overarching module that encapsulates LangChain plus another LLM base that groups/holds other LLMs?
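For illustration only, one hedged option for accepting non-LangChain clients would be a thin adapter exposing the LangChain-style invoke() call; the NonLangchainAdapter name and the client's complete() method below are made up:

```python
# Hypothetical sketch: wrap any client that has `complete(prompt) -> str`
# behind the .invoke() interface that LangChain chat models expose.
class NonLangchainAdapter:
    def __init__(self, client):
        self._client = client

    def invoke(self, prompt):
        # Coerce whatever the assistant passes in into a plain prompt string.
        text = prompt if isinstance(prompt, str) else str(prompt)
        return self._client.complete(text)
```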

onuratakan commented 3 weeks ago


Actually, the largest ecosystem is LangChain, and they keep adding new LLMs.
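For example (a hedged sketch, assuming the start(llm) entry point above and the separate LangChain integration packages), Gemini Pro and Llama 3 could be passed in through their LangChain wrappers:

```python
# Illustrative only: requires the langchain-google-genai (and optionally
# langchain-community + Ollama) integrations to be installed.
from gpt_computer_assistant import start
from langchain_google_genai import ChatGoogleGenerativeAI
# from langchain_community.chat_models import ChatOllama  # e.g. a local Llama 3 via Ollama

gemini = ChatGoogleGenerativeAI(model="gemini-pro")  # needs GOOGLE_API_KEY in the environment
# llama3 = ChatOllama(model="llama3")

start(gemini)
```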