Open johnfelipe opened 1 year ago
- tloen/alpaca-lora-7b: the original 7B Alpaca-LoRA checkpoint by tloen (updated on 4/4/2022)
- LLMs/Alpaca-LoRA-7B-elina: the 7B Alpaca-LoRA checkpoint by Chansung (updated on 5/1/2022)
- LLMs/Alpaca-LoRA-13B-elina: the 13B Alpaca-LoRA checkpoint by Chansung (updated on 5/1/2022)
- LLMs/Alpaca-LoRA-30B-elina: the 30B Alpaca-LoRA checkpoint by Chansung (updated on 5/1/2022)
- LLMs/Alpaca-LoRA-65B-elina: the 65B Alpaca-LoRA checkpoint by Chansung (updated on 5/1/2022)
- LLMs/AlpacaGPT4-LoRA-7B-elina: the 7B Alpaca-LoRA checkpoint trained on a GPT-4-generated Alpaca-style dataset by Chansung (updated on 5/1/2022)
- LLMs/AlpacaGPT4-LoRA-13B-elina: the 13B Alpaca-LoRA checkpoint trained on a GPT-4-generated Alpaca-style dataset by Chansung (updated on 5/1/2022)
- stabilityai/stablelm-tuned-alpha-7b: StableLM-based fine-tuned model
- beomi/KoAlpaca-Polyglot-12.8B: Polyglot-based, Alpaca-style instruction fine-tuned model
- declare-lab/flan-alpaca-xl: Flan XL (3B) based Alpaca-style instruction fine-tuned model
- declare-lab/flan-alpaca-xxl: Flan XXL (11B) based Alpaca-style instruction fine-tuned model
- OpenAssistant/stablelm-7b-sft-v7-epoch-3: StableLM (7B) based model fine-tuned on OpenAssistant's oasst1 instruction dataset
- Writer/camel-5b-hf: Palmyra-base based instruction fine-tuned model. The foundation model and the data are from its creator, Writer
- lmsys/fastchat-t5-3b-v1.0: T5 (3B) based Vicuna-style model instruction fine-tuned on ShareGPT by lm-sys
- LLMs/Stable-Vicuna-13B: Stable Vicuna (13B) from CarperAI and Stability AI. This is not a delta weight, so use it at your own risk. I will make this repo private soon and add a Hugging Face token field
- LLMs/Vicuna-7b-v1.1: Vicuna (7B) from FastChat. This is not a delta weight, so use it at your own risk. I will make this repo private soon and add a Hugging Face token field
- LLMs/Vicuna-13b-v1.1: Vicuna (13B) from FastChat. This is not a delta weight, so use it at your own risk. I will make this repo private soon and add a Hugging Face token field
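The Alpaca-style checkpoints above are generally prompted with the Stanford Alpaca instruction template. A minimal sketch of that formatting (the helper name is my own, not part of any of these repos):

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request using the Stanford Alpaca instruction template.

    Alpaca fine-tunes were trained on prompts with "### Instruction:" /
    "### Input:" / "### Response:" headers; the model completes the text
    after "### Response:".
    """
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Translate to French", "Hello")
```

The resulting string is what gets tokenized and sent to the checkpoint; generation is stopped when the model emits its end-of-sequence token.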
- togethercomputer/RedPajama-INCITE-Chat-7B-v0.1: RedPajama INCITE Chat (7B) from Together
- mosaicml/mpt-7b-chat: MPT-7B Chat from MosaicML
- teknium/llama-deus-7b-v3-lora: LLaMA 7B based Alpaca-style instruction fine-tuned model. The only difference from Alpaca is that this model is fine-tuned on more data, including the Alpaca dataset, GPTeacher, General Instruct, Code Instruct, Roleplay Instruct, Roleplay V2 Instruct, GPT4-LLM Uncensored, Unnatural Instructions, WizardLM Uncensored, CamelAI's 20k Biology, 20k Physics, 20k Chemistry, and 50k Math GPT4 datasets, and CodeAlpaca
- HuggingFaceH4/starchat-alpha: StarCoder (15.5B) based instruction fine-tuned model. This model is particularly good at answering questions about coding
- LLMs/Vicuna-LoRA-EvolInstruct-7B: LLaMA 7B based Vicuna-style instruction fine-tuned model. The fine-tuning dataset is WizardLM's Evol-Instruct dataset
- LLMs/Vicuna-LoRA-EvolInstruct-13B: LLaMA 13B based Vicuna-style instruction fine-tuned model. The fine-tuning dataset is WizardLM's Evol-Instruct dataset
- project-baize/baize-v2-7b: LLaMA 7B based Baize
- project-baize/baize-v2-13b: LLaMA 13B based Baize
- timdettmers/guanaco-7b: LLaMA 7B based Guanaco, fine-tuned on the OASST1 dataset with the QLoRA technique introduced in the "QLoRA: Efficient Finetuning of Quantized LLMs" paper
- timdettmers/guanaco-13b: LLaMA 13B based Guanaco, fine-tuned on the OASST1 dataset with the QLoRA technique introduced in the "QLoRA: Efficient Finetuning of Quantized LLMs" paper
- timdettmers/guanaco-33b-merged: LLaMA 30B based Guanaco, fine-tuned on the OASST1 dataset with the QLoRA technique introduced in the "QLoRA: Efficient Finetuning of Quantized LLMs" paper
- tiiuae/falcon-7b-instruct: Falcon 7B based model instruction fine-tuned on the Baize, GPT4All, GPTeacher, and RefinedWeb-English datasets
- tiiuae/falcon-40b-instruct: Falcon 40B based model instruction fine-tuned on the Baize and RefinedWeb-English datasets
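Several of the Vicuna uploads above are flagged as "not a delta weight". For context, FastChat normally distributes Vicuna as delta weights that must be added to the base LLaMA weights to recover the fine-tuned model. A toy sketch of that merge step, with plain Python lists standing in for the real tensors (illustrative values only):

```python
# Toy illustration of applying delta weights: each published delta tensor is
# added element-wise to the corresponding base-model tensor. A real merge
# iterates over every tensor in the model's state dict; here we use one
# made-up 3-element "layer" to show the arithmetic.
base_weights = {"layer.0.weight": [0.10, -0.20, 0.30]}
delta_weights = {"layer.0.weight": [0.01, 0.05, -0.02]}

merged_weights = {
    name: [b + d for b, d in zip(base_weights[name], delta_weights[name])]
    for name in base_weights
}
```

Repos that ship the merged result directly (rather than the delta) redistribute the full LLaMA-derived weights, which is why the uploader warns "use it at your own risk".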
Do you really need them all? Most of them do not have a stable deployed service.
Only the stable or beta releases.
Which ones will be stable or beta? Is that in the roadmap?
Any integration needs labor. You can specify which one you actually need.
Please choose the best stable or beta release, and note the programming language. Thanks.
Can you review whether it is possible to add Falcon 7B? https://youtu.be/FTm5C_vV_EY?t=1076
I can't find a stable deployed service for Falcon.