fiatrete / OpenDAN-Personal-AI-OS

OpenDAN is an open-source Personal AI OS that consolidates various AI modules in one place for your personal use.
https://opendan.ai
MIT License

Custom LLM #5

Closed. carlcc closed this issue 1 year ago.

carlcc commented 1 year ago

I have already experienced Jarvis and found it very interesting.

It looks like you are researching a new self-developed LLM engine. When can we try it? Will it be open source?

fiatrete commented 1 year ago

We tested a set of open-source LLMs but found that their intelligence level is currently insufficient to support stable task execution. We will continue to monitor and test emerging LLMs. Given the pace of development in the open-source model community, we believe that intelligent models with sufficient capabilities will soon become available.

Renegadesoffun commented 1 year ago

Hi. When I play around with Wizard-Vicuna and similar models, the output seems pretty good, so those might be worth looking into. Also, llama.cpp has integrated a new ability to load part of a model onto the GPU and part onto the CPU, so I've been able to run 30B+ models fairly fast. Hopefully these could run the tasks efficiently, locally. I'm sure you've seen this channel already, but it has lots of tricks: https://youtube.com/@Aitrepreneur Also, have you integrated LangChain? I feel like that could help its long-term efficiency. Thanks!
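The GPU/CPU split mentioned above works per transformer layer: llama.cpp lets you offload the first N layers to the GPU (the `-ngl` / `n_gpu_layers` option) and run the rest on the CPU. A rough back-of-envelope sketch of how you might pick N, using hypothetical per-layer sizes (the function name and numbers are illustrative, not from llama.cpp itself):

```python
def layers_on_gpu(n_layers: int, layer_size_gb: float, vram_budget_gb: float) -> int:
    """Estimate how many transformer layers fit in a VRAM budget.

    The remaining layers stay on the CPU, which is the idea behind
    llama.cpp's -ngl / n_gpu_layers option. Sizes are rough estimates.
    """
    fit = int(vram_budget_gb // layer_size_gb)
    return min(fit, n_layers)


# Example: a ~30B 4-bit quantized model with ~60 layers of ~0.32 GB each,
# on an 8 GB GPU, keeping ~2 GB headroom for the KV cache and buffers:
n = layers_on_gpu(n_layers=60, layer_size_gb=0.32, vram_budget_gb=8 - 2)
print(n)  # value to pass as -ngl (or n_gpu_layers in llama-cpp-python)
```

The actual per-layer footprint depends on the model architecture and quantization format, so in practice people tune `-ngl` empirically until the model no longer fits.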

carlcc commented 1 year ago

> We tested a set of open-source LLMs but found that their intelligence level is currently insufficient to support stable task execution. We will continue to monitor and test emerging LLMs. Given the pace of development in the open-source model community, we believe that intelligent models with sufficient capabilities will soon become available.

Really looking forward to the open-source model.

carlcc commented 1 year ago

> Hi. When I play around with Wizard-Vicuna and similar models, the output seems pretty good, so those might be worth looking into. Also, llama.cpp has integrated a new ability to load part of a model onto the GPU and part onto the CPU, so I've been able to run 30B+ models fairly fast. Hopefully these could run the tasks efficiently, locally. I'm sure you've seen this channel already, but it has lots of tricks: https://youtube.com/@Aitrepreneur Also, have you integrated LangChain? I feel like that could help its long-term efficiency. Thanks!

Good news, I'm glad to hear that. A powerful open-source AI agent is likely just around the corner.