camel-ai / camel

🐫 CAMEL: Finding the Scaling Law of Agents. A multi-agent framework. https://www.camel-ai.org
Apache License 2.0

Add support for a free, open-source LLM running locally in CAMEL #117

Closed jbdatascience closed 1 year ago

jbdatascience commented 1 year ago

I would also like CAMEL to support a free, open-source LLM running locally.

---

EDIT May 26, 2023

Perhaps this is an even better alternative than HuggingChat:

Gorilla: an LLM connected to a massive set of APIs!

https://gorilla.cs.berkeley.edu/

https://twitter.com/intuitmachine/status/1661812484062281744?s=46&t=X6lS_Zp1k_p2yWVr44f_7A

---

My suggestion:

Perhaps HuggingChat v0.2 is a good alternative:

https://huggingface.co/chat/

HuggingChat v0.2: Making the community's best AI chat models available to everyone.

NEW: The Chat UI is now open-sourced on GitHub: https://github.com/huggingface/chat-ui

Current model: OpenAssistant/oasst-sft-6-llama-30b https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor

Dataset: https://huggingface.co/datasets/OpenAssistant/oasst1

Website: https://open-assistant.io/
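
For concreteness, here is a minimal local-inference sketch with Hugging Face `transformers`. Note that the `-xor` checkpoint linked above ships as XOR deltas against the original LLaMA weights and cannot be loaded directly, so the sketch assumes one of OpenAssistant's directly downloadable Pythia-based checkpoints (the model ID below is illustrative):

```python
# Minimal local-inference sketch with Hugging Face transformers.
# The LLaMA-based "-xor" release above must first be recombined with the
# original LLaMA weights; this assumes a directly loadable checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

# OpenAssistant models are trained with <|prompter|>/<|assistant|> chat markers.
prompt = "<|prompter|>What is the CAMEL framework?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```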

But because things are changing rapidly in this day and age, perhaps there are already other alternatives! My application would be to use free, open-source LLMs instead of the OpenAI LLM used in the practical example being showcased here:

Camel + LangChain for Synthetic Data & Market Research https://www.youtube.com/watch?v=GldMMK6-_-g

Colab notebook: https://colab.research.google.com/drive/1BuudlvBrKBl1bNhMp-aB5uOW62-JlrSY?usp=sharing

Or in the example on your website: camel_demo.ipynb https://colab.research.google.com/drive/1AzP33O8rnMW__7ocWJhVBXjKziJXPtim?usp=sharing
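
As a hedged sketch of what I have in mind: a local open-source model could be dropped into a LangChain pipeline in place of the OpenAI LLM used in those notebooks. The model ID and settings below are illustrative, not taken from the notebooks:

```python
# Sketch: swap a locally running Hugging Face model in for the OpenAI LLM
# in a LangChain pipeline. Model ID and generation settings are illustrative.
from langchain import LLMChain, PromptTemplate
from langchain.llms import HuggingFacePipeline

local_llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-large",   # any locally runnable model
    task="text2text-generation",
    model_kwargs={"temperature": 0.7, "max_length": 256},
)

# Use it anywhere LangChain expects an LLM, e.g. in an LLMChain.
prompt = PromptTemplate.from_template("Summarize in one sentence: {text}")
chain = LLMChain(llm=local_llm, prompt=prompt)
print(chain.run(text="CAMEL coordinates role-playing agents to solve tasks."))
```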

orophix commented 1 year ago

To make this work, the local LLM needs to be GPU-friendly. Running AutoGPT on a CPU is almost unusable.

jbdatascience commented 1 year ago

> To make this work, the local LLM needs to be GPU-friendly. Running AutoGPT on a CPU is almost unusable.

I think there are a lot of open-source large language models out there that can run locally on a GPU! For example, Falcon, the current leader of the Open LLM Leaderboard, can run on a GPU:

https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
https://huggingface.co/tiiuae/falcon-40b-instruct

> Falcon-40B is the best open-source model available. It outperforms LLaMA, StableLM, RedPajama, MPT, etc. See the OpenLLM Leaderboard.

About the hardware requirements: I just posted that question under this article on Medium: https://medium.com/@bnjmn_marie/fine-tune-falcon-7b-on-your-gpu-with-trl-and-qlora-4490fadc3fbb

What are the hardware requirements for doing inference locally on your PC with Falcon-40B (and Falcon-7B)?

I am waiting for the answer!
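
In the meantime, here is a minimal sketch of the kind of setup that usually makes local inference feasible on a single consumer GPU: loading Falcon-7B-Instruct with 4-bit quantization via bitsandbytes (the 40B variant needs far more VRAM, even quantized). The settings are illustrative:

```python
# Sketch: 4-bit quantized local inference with Falcon-7B-Instruct.
# Requires transformers, accelerate, and bitsandbytes; 4-bit quantization
# brings the 7B model to very roughly 6-8 GB of VRAM. Settings illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tiiuae/falcon-7b-instruct"
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon shipped custom modeling code at the time
)

inputs = tokenizer("Explain role-playing agents in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```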

lightaime commented 1 year ago

Support for local models will be added with this PR: https://github.com/camel-ai/camel/pull/245.
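
Until that lands, one common workaround (not necessarily the mechanism PR #245 implements) is to serve a local model behind an OpenAI-compatible HTTP endpoint, e.g. with FastChat, and point the `openai` client at it. A hedged sketch, with the endpoint address and model name as assumptions:

```python
# Sketch: point the openai client at a locally served, OpenAI-compatible
# endpoint (e.g. FastChat's openai_api_server). Endpoint and model name
# are assumptions; this is not necessarily what PR #245 implements.
import openai

openai.api_key = "EMPTY"                      # local servers typically ignore the key
openai.api_base = "http://localhost:8000/v1"  # assumed address of the local server

response = openai.ChatCompletion.create(
    model="vicuna-7b-v1.3",  # whatever model the local server hosts
    messages=[{"role": "user", "content": "Say hello from a local LLM."}],
)
print(response["choices"][0]["message"]["content"])
```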