The vigogne (French name for vicuña) is a South American camelid native to the Andes Mountains. It is closely related to the llama, alpaca, and guanaco.
Vigogne is a collection of powerful, open-source 🇫🇷 French large language models (LLMs) designed for instruction following and chat.
The main contributions of this project include:
💡 The screencast below shows the current 🦙 Vigogne-7B-Chat model running on an Apple M1 Pro using 4 GB of weights (not sped up).
git clone https://github.com/bofenghuang/vigogne.git
cd vigogne
# Install DeepSpeed if you want to accelerate training with it
pip install deepspeed
# Install FlashAttention to further speed up training and reduce memory usage (essential for long sequences)
pip install packaging ninja
# For FlashAttention 1
# pip install --no-build-isolation "flash-attn<2"
# For FlashAttention 2
# May take 3-5 minutes on a 64-core machine
pip install --no-build-isolation flash-attn
pip install .
The fine-tuned 🦙 Vigogne models come in two types: instruction-following models and chat models. The instruction-following models are optimized to generate concise and helpful responses to user instructions, similar to text-davinci-003. The chat models are designed for multi-turn dialogues, but they also perform well on instruction-following tasks, similar to gpt-3.5-turbo.
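As an illustration of how instruction-following models of this kind are typically prompted, here is a minimal sketch of an Alpaca-style prompt builder. The template below is hypothetical and for illustration only; the exact prompt format Vigogne expects is defined in vigogne/model.

```python
# Illustrative Alpaca-style prompt builder for a single-turn instruction.
# The header and section markers below are a common convention, not
# necessarily the exact template used by Vigogne.

def build_instruct_prompt(instruction: str, user_input: str = "") -> str:
    """Assemble a single-turn instruction prompt (hypothetical template)."""
    header = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
    )
    if user_input:
        return (
            f"{header}### Instruction:\n{instruction}\n\n"
            f"### Input:\n{user_input}\n\n### Response:\n"
        )
    return f"{header}### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_instruct_prompt(
    "Expliquez la différence entre un lama et une vigogne."
)
```

The model's generated text then continues from the trailing `### Response:` marker.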
More information can be found in vigogne/model.
This repository offers multiple options for inference and deployment, including Google Colab notebooks, Gradio demos, FastChat, and vLLM. It also offers guidance on conducting experiments using llama.cpp on your personal computer.
More information can be found in vigogne/inference.
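When serving the chat models through raw-text backends such as llama.cpp, the multi-turn history must be flattened into a single prompt string. The sketch below shows the general shape; the role tags are hypothetical, and the actual chat template is documented in vigogne/inference.

```python
# Illustrative sketch of flattening a multi-turn conversation into one
# prompt string for a chat model. The <|role|> tags are hypothetical;
# the real template ships with the model (see vigogne/inference).

def build_chat_prompt(messages, system="Vous êtes un assistant IA."):
    parts = [f"<|system|>: {system}"]
    for msg in messages:
        parts.append(f"<|{msg['role']}|>: {msg['content']}")
    # End with an open assistant turn for the model to complete.
    parts.append("<|assistant|>:")
    return "\n".join(parts)

history = [
    {"role": "user", "content": "Bonjour !"},
    {"role": "assistant", "content": "Bonjour, comment puis-je vous aider ?"},
    {"role": "user", "content": "Parlez-moi des vigognes."},
]
prompt = build_chat_prompt(history)
```

Each new user turn is appended to the history and the whole transcript is re-flattened before the next generation call.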
This repository provides integration examples for incorporating Vigogne models into diverse application ecosystems, including LangChain.
More information can be found in vigogne/application.
The Vigogne models were trained on a variety of datasets, including open-source datasets, ChatGPT-distillation datasets (self-instruct, self-chat, and orca-style data), and translated datasets.
More information can be found in vigogne/data.
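Instruction-tuning datasets of this kind are commonly stored as JSON Lines, one record per line. The field names below are a hypothetical example for illustration; the actual schemas are documented in vigogne/data.

```python
import json

# Hypothetical example of a single instruction-tuning record in JSON Lines.
# The field names are illustrative, not the project's canonical schema.
record = {
    "instruction": "Traduisez la phrase suivante en français.",
    "input": "The vicuña lives in the Andes.",
    "output": "La vigogne vit dans les Andes.",
}

# One record per line; ensure_ascii=False keeps accented characters readable.
line = json.dumps(record, ensure_ascii=False)
parsed = json.loads(line)
```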
The Vigogne models were mostly instruction fine-tuned from other foundation models.
More information can be found in vigogne/training.
Vigogne is still under development, and many limitations remain to be addressed. Please note that the models may generate harmful or biased content, incorrect information, or generally unhelpful answers.
Our project builds upon the following open-source projects. We extend our sincere gratitude to everyone involved in their research and development.
If you find the model, data, and code in our project useful, please consider citing our work as follows:
@misc{vigogne,
  author = {Bofeng Huang},
  title = {Vigogne: French Instruction-following and Chat Models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/bofenghuang/vigogne}},
}