h2oai / h2ogpt

Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/

Add option to replace attention with flash attention #25

Closed ryanchesler closed 1 year ago

ryanchesler commented 1 year ago

Flash attention has already been integrated into gpt-neox models here: https://github.com/HazyResearch/flash-attention/blob/main/flash_attn/models/gpt.py#L215

Can add the swapped model definition as an option to the training and generation scripts and benchmark the speed difference.
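A rough, untested sketch of what that option could look like in the training/generation scripts. The GPTLMHeadModel class and use_flash_attn config field are assumptions taken from the linked gpt.py; the exact interface should be checked there before wiring this in:

from transformers import AutoModelForCausalLM, GPT2Config

def build_model(model_name: str, use_flash_attn: bool = False):
    if not use_flash_attn:
        # default path: stock Hugging Face attention
        return AutoModelForCausalLM.from_pretrained(model_name)
    # swapped definition: flash-attn's GPT implementation with flash kernels enabled
    from flash_attn.models.gpt import GPTLMHeadModel
    config = GPT2Config.from_pretrained(model_name)
    config.use_flash_attn = True  # assumed switch, per the linked flash_attn/models/gpt.py
    model = GPTLMHeadModel(config)
    # NOTE: pretrained weights would still need to be loaded/remapped into this definition
    return model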

Converting Llama and others might be more work. It uses pretty standard-looking attention, but I'm not sure how it differs from the PyTorch default. It might just need some layer names remapped: https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L160
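As a hypothetical illustration of that layer-name remap, something like this could translate Hugging Face llama attention weight names into whatever a flash-attention module expects. The KEY_MAP entries are placeholders for illustration, not a verified mapping:

import torch

KEY_MAP = {
    "self_attn.q_proj": "attn.q_proj",  # made-up target names, for illustration only
    "self_attn.k_proj": "attn.k_proj",
    "self_attn.v_proj": "attn.v_proj",
    "self_attn.o_proj": "attn.out_proj",
}

def remap_state_dict(state_dict: dict) -> dict:
    remapped = {}
    for key, tensor in state_dict.items():
        new_key = key
        for old, new in KEY_MAP.items():
            if old in key:
                new_key = key.replace(old, new)
                break
        remapped[new_key] = tensor
    return remapped

# usage: model.load_state_dict(remap_state_dict(torch.load("pytorch_model.bin")), strict=False)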

pseudotensor commented 1 year ago

Some old notes from slack last week:

Right now, nothing in torch/huggingface can be used directly to do flash attention. One would need to swap out the layer, which is possible; this is what the gpt-neox repo does. I'll have to look more carefully at that approach to see how to do it, similar to how the other vicuna repo does it for llama. An alternative is to use the gpt-neox repo directly with their training code, which is probably fine. I installed all their dependencies and nothing had issues:

# environment setup (CUDA 11.7 toolchain)
source ~/.bashrc.mamba
mamba create -n gptneox
conda activate gptneox
mamba install python=3.8 -y
mamba install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia -y

# gpt-neox dependencies, fused kernels, and flash attention
cd gpt-neox/
pip install -r requirements/requirements.txt
mamba install cudatoolkit-dev=11.7 cudatoolkit=11.7 -c conda-forge -c nvidia -y
unset CUDA_HOME
python ./megatron/fused_kernels/setup.py install
pip install -r ./requirements/requirements-flashattention.txt
cd ..

# DeepSpeed fork used by gpt-neox
git clone https://github.com/EleutherAI/DeeperSpeed.git
cd DeeperSpeed
./install.sh

CUDA 11.7 is required.
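A quick sanity check (my own suggestion, not from the gpt-neox docs) that the environment above came up correctly:

import torch
print("torch CUDA version:", torch.version.cuda)  # should report 11.7
print("GPU visible:", torch.cuda.is_available())

import flash_attn  # fails here if the flash-attention build didn't succeed
print("flash-attn imported OK")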

pseudotensor commented 1 year ago

WIP for neox using flash attention in Hugging Face transformers, but no activity for the last 3 months, so it's probably dead: https://github.com/conceptofmind/flash-gpt

pseudotensor commented 1 year ago

Amazon thing: https://aws.amazon.com/blogs/machine-learning/new-performance-improvements-in-amazon-sagemaker-model-parallel-library/

To help our customers further minimize training costs and accelerate time-to-market, we are thrilled to introduce two new performance improvements in SageMaker model parallel — SMDDP Collectives and FlashAttention. SMDDP Collectives is the most performant collective library on AWS infrastructure for large model training offered by SageMaker distributed data parallel library. FlashAttention is introduced in Dao et al., which re-implements the attention mechanism in an IO-aware manner, reducing the memory bandwidth requirement and saving on attention speed and memory footprint. These two components collectively push our sharded data parallel technique to be 30.58% faster when training a 100B parameter GPT-NeoX model on 32 p4d.24xlarge instances. For customers who are already using sharded data parallel on supported models, no code changes are necessary to benefit from the performance boost offered by these latest features.

So maybe we should use SageMaker. I think I noticed this somewhere else before. But I'm unsure how compatible it is with other weights, e.g. Hugging Face.

100B parameter GPT-NeoX model on 32 p4d.24xlarge instances

pseudotensor commented 1 year ago

You can use the same install above to make llama use flash attention via the wrappers/patches from the vicuna model:
https://github.com/lm-sys/FastChat/blob/main/fastchat/train/llama_flash_attn_monkey_patch.py
https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train_mem.py#L5
So we can already do that for the llama case if we are interested.
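For reference, the pattern in the linked train_mem.py is just to apply the patch before the llama modeling code gets imported; a minimal sketch of the same idea (entry-point names taken from FastChat at the time of the links):

from fastchat.train.llama_flash_attn_monkey_patch import replace_llama_attn_with_flash_attn

# must run before transformers' llama modeling code is imported,
# so LlamaAttention.forward is already swapped for the flash-attention version
replace_llama_attn_with_flash_attn()

from fastchat.train.train import train

if __name__ == "__main__":
    train()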

pseudotensor commented 1 year ago

https://github.com/EleutherAI/gpt-neox/pull/725

arnocandel commented 1 year ago

^ specifically https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model/flash_attention.py

arnocandel commented 1 year ago

128