ai-shifu / ChatALL

Concurrently chat with ChatGPT, Bing Chat, Bard, Alpaca, Vicuna, Claude, ChatGLM, MOSS, 讯飞星火, 文心一言 and more, discover the best answers
https://chatall.ai
Apache License 2.0
15.27k stars 1.65k forks

[FEAT] Falcon 7b #188

Open johnfelipe opened 1 year ago

johnfelipe commented 1 year ago
models
johnfelipe commented 1 year ago

- tloen/alpaca-lora-7b: the original 7B Alpaca-LoRA checkpoint by tloen (updated 4/4/2022)
- LLMs/Alpaca-LoRA-7B-elina: the 7B Alpaca-LoRA checkpoint by Chansung (updated 5/1/2022)
- LLMs/Alpaca-LoRA-13B-elina: the 13B Alpaca-LoRA checkpoint by Chansung (updated 5/1/2022)
- LLMs/Alpaca-LoRA-30B-elina: the 30B Alpaca-LoRA checkpoint by Chansung (updated 5/1/2022)
- LLMs/Alpaca-LoRA-65B-elina: the 65B Alpaca-LoRA checkpoint by Chansung (updated 5/1/2022)
- LLMs/AlpacaGPT4-LoRA-7B-elina: the 7B Alpaca-LoRA checkpoint trained on a GPT-4-generated Alpaca-style dataset by Chansung (updated 5/1/2022)
- LLMs/AlpacaGPT4-LoRA-13B-elina: the 13B Alpaca-LoRA checkpoint trained on a GPT-4-generated Alpaca-style dataset by Chansung (updated 5/1/2022)
- stabilityai/stablelm-tuned-alpha-7b: StableLM-based fine-tuned model
- beomi/KoAlpaca-Polyglot-12.8B: Polyglot-based Alpaca-style instruction fine-tuned model
- declare-lab/flan-alpaca-xl: Flan XL (3B) based Alpaca-style instruction fine-tuned model
- declare-lab/flan-alpaca-xxl: Flan XXL (11B) based Alpaca-style instruction fine-tuned model
- OpenAssistant/stablelm-7b-sft-v7-epoch-3: StableLM (7B) based model fine-tuned on OpenAssistant's oasst1 instruction dataset
- Writer/camel-5b-hf: Palmyra-base based instruction fine-tuned model; the foundation model and the data are from its creator, Writer
- lmsys/fastchat-t5-3b-v1.0: T5 (3B) based Vicuna-style model instruction fine-tuned on ShareGPT by lm-sys
- LLMs/Stable-Vicuna-13B: Stable Vicuna (13B) from Carper AI and Stability AI. This is not a delta weight, so use it at your own risk. I will make this repo private soon and add a Hugging Face token field.
- LLMs/Vicuna-7b-v1.1: Vicuna (7B) from FastChat. This is not a delta weight, so use it at your own risk. I will make this repo private soon and add a Hugging Face token field.
- LLMs/Vicuna-13b-v1.1: Vicuna (13B) from FastChat. This is not a delta weight, so use it at your own risk. I will make this repo private soon and add a Hugging Face token field.
- togethercomputer/RedPajama-INCITE-Chat-7B-v0.1: RedPajama INCITE Chat (7B) from Together
- mosaicml/mpt-7b-chat: MPT-7B from MosaicML
- teknium/llama-deus-7b-v3-lora: LLaMA 7B based Alpaca-style instruction fine-tuned model. The only difference from Alpaca is that this model is fine-tuned on more data, including the Alpaca dataset, GPTeacher, General Instruct, Code Instruct, Roleplay Instruct, Roleplay V2 Instruct, GPT4-LLM Uncensored, Unnatural Instructions, WizardLM Uncensored, CamelAI's 20k Biology, 20k Physics, 20k Chemistry, and 50k Math GPT-4 datasets, and CodeAlpaca
- HuggingFaceH4/starchat-alpha: StarCoder 15.5B based instruction fine-tuned model, particularly good at answering questions about coding
- LLMs/Vicuna-LoRA-EvolInstruct-7B: LLaMA 7B based Vicuna-style instruction fine-tuned model on WizardLM's Evol-Instruct dataset
- LLMs/Vicuna-LoRA-EvolInstruct-13B: LLaMA 13B based Vicuna-style instruction fine-tuned model on WizardLM's Evol-Instruct dataset
- project-baize/baize-v2-7b: LLaMA 7B based Baize
- project-baize/baize-v2-13b: LLaMA 13B based Baize
- timdettmers/guanaco-7b: LLaMA 7B based Guanaco, fine-tuned on the OASST1 dataset with the QLoRA technique introduced in the "QLoRA: Efficient Finetuning of Quantized LLMs" paper
- timdettmers/guanaco-13b: LLaMA 13B based Guanaco, fine-tuned the same way
- timdettmers/guanaco-33b-merged: LLaMA 30B based Guanaco, fine-tuned the same way (merged weights)
- tiiuae/falcon-7b-instruct: Falcon 7B based model instruction fine-tuned on the Baize, GPT4All, GPTeacher, and RefinedWeb-English datasets
- tiiuae/falcon-40b-instruct: Falcon 40B based model instruction fine-tuned on the Baize and RefinedWeb-English datasets
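Many of the checkpoints above are "Alpaca style" instruction models, meaning they were fine-tuned on prompts wrapped in the Alpaca template. A minimal sketch of that template (the wording follows the original Alpaca repo's no-input variant; individual forks may tweak it, so check each model card):

```python
# Alpaca-style instruction prompt template (no-input variant).
# Models fine-tuned on other templates will need different wrapping.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_alpaca_prompt(instruction: str) -> str:
    """Wrap a bare instruction in the Alpaca prompt template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(format_alpaca_prompt("List three Falcon checkpoints."))
```

The model's completion is whatever it generates after the trailing `### Response:` marker.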

sunner commented 1 year ago

Do you really need them all? Most of them don't have a stable deployed service.

johnfelipe commented 1 year ago

Only stable or beta releases.

johnfelipe commented 1 year ago

Which of the stable or beta ones are on the roadmap?

sunner commented 1 year ago

Every integration takes work. Please specify which one you actually need.

johnfelipe commented 1 year ago

Please choose the best stable or beta release, along with its programming language. Thanks.

johnfelipe commented 1 year ago

Can you review whether it's possible to add Falcon 7B? https://youtu.be/FTm5C_vV_EY?t=1076

sunner commented 1 year ago

> Can you review whether it's possible to add Falcon 7B? https://youtu.be/FTm5C_vV_EY?t=1076

I can't find a stable hosted service for Falcon.

johnfelipe commented 1 year ago

https://huggingface.co/tiiuae/falcon-7b-instruct

or

https://gpt.h2o.ai/
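For reference, the tiiuae/falcon-7b-instruct checkpoint can be reached through Hugging Face's hosted Inference API without running the model locally. A minimal sketch, assuming a valid Hugging Face access token (the `hf_...` placeholder below) and the standard text-generation request shape:

```python
# Hedged sketch: query the hosted tiiuae/falcon-7b-instruct model through the
# Hugging Face Inference API. The payload follows the standard text-generation
# task format; availability and rate limits depend on your account.
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct"

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Build the authenticated POST request for the text-generation endpoint."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": 200}}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Uncomment to actually call the API with your own token:
# resp = urllib.request.urlopen(build_request("What is ChatALL?", "hf_..."))
# print(json.loads(resp.read())[0]["generated_text"])
```

A bot integration in ChatALL would wrap something like this behind a token field in the settings, the same way other API-backed bots are configured.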