pytorch-labs / gpt-fast

Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
BSD 3-Clause "New" or "Revised" License

Tried Tensor Parallel on a server equipped with two V100s linked by NVLink, but got a performance degradation #111

Open duanzhaol opened 9 months ago

duanzhaol commented 9 months ago

I'm attempting to deploy llama2-7b-chat-hf on a server equipped with two V100 GPUs linked by NVLink, but I've encountered an issue where the tokens per second (token/s) performance is worse when utilizing both GPUs compared to just one. Specifically, with a single GPU, I achieve 29 tokens/s, but this drops to only 17 tokens/s when employing tensor parallelism.


I analyzed the process using Nsight Systems, which revealed that the all-reduce operation averages 500 µs. This duration significantly exceeds that of the compute operations. Here's a snapshot from the Nsight Systems analysis: [Nsight Systems trace screenshot]
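For context, the collective can also be timed in isolation from the model with a standalone microbenchmark roughly like the sketch below (the 4096-element fp16 tensor is an assumption, chosen to approximate the per-token hidden-state payload of a 7B model at TP=2; the script name is arbitrary):

```python
# Minimal all-reduce microbenchmark (a sketch) to isolate communication latency
# from the rest of the decode loop. Launch with:
#   torchrun --standalone --nproc_per_node=2 bench_allreduce.py
import time

import torch
import torch.distributed as dist

dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

# Roughly the per-token hidden-state payload of a 7B model at TP=2 (assumption).
x = torch.randn(4096, dtype=torch.float16, device="cuda")

# Warm up so NCCL sets up its communicators and buffers first.
for _ in range(10):
    dist.all_reduce(x)
torch.cuda.synchronize()

iters = 100
start = time.perf_counter()
for _ in range(iters):
    dist.all_reduce(x)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"avg all_reduce latency: {elapsed / iters * 1e6:.1f} us")

dist.destroy_process_group()
```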

I have confirmed that NVLink is active, yet I'm puzzled by why the communication time is so prolonged. Could this be due to a configuration mistake? Below is a screenshot confirming NVLink's activation:

[screenshot: NVLink status]

Furthermore, here is the command I used to run the program, explicitly setting `NCCL_P2P_LEVEL=NVL` to ensure NCCL uses peer-to-peer communication; the NCCL log indicates that P2P is being utilized:

`NCCL_P2P_LEVEL=NVL NCCL_DEBUG=TRACE OMP_NUM_THREADS=8 ENABLE_INTRA_NODE_COMM=1 torchrun --standalone --nproc_per_node=2 generate.py --checkpoint_path ../llama-2-7b-chat-hf/model.pth --prompt "Hello, my name is" --num_samples 2`

Could anyone provide insight into why the communication overhead is so substantial, and whether a configuration error might be causing this inefficiency?
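For what it's worth, a quick sanity check like the sketch below can confirm that PyTorch itself sees peer-to-peer access between the two GPUs (the driver-side view comes from `nvidia-smi topo -m`):

```python
# Sketch: verify that each GPU can access the other over P2P from PyTorch's side.
import torch

assert torch.cuda.device_count() >= 2, "expected at least two GPUs"
print("GPU 0 -> GPU 1 P2P:", torch.cuda.can_device_access_peer(0, 1))
print("GPU 1 -> GPU 0 P2P:", torch.cuda.can_device_access_peer(1, 0))
```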

yifuwang commented 9 months ago

From the information you provided:

I suggest:

duanzhaol commented 9 months ago

Thank you for your response. I have confirmed there is a straggler issue: as illustrated in the attached trace, the first GPU sits idle waiting for the second GPU during the AllReduce operation.

However, I am puzzled by the cause of this behavior. I have ensured that no other programs are running on the system that could interfere. The setup was established simply by cloning this repository and executing the provided code. Could there be a misconfiguration or another underlying issue that I might have overlooked?
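One way to quantify this (a sketch, assuming the process group is already initialized inside generate.py) is to time the collective separately on each rank; if one rank consistently reports a much longer all_reduce time than the other, it is really measuring the wait for the straggler rather than the communication itself:

```python
# Sketch of per-rank timing around the collective (hypothetical helper, not part
# of gpt-fast). Requires an already-initialized NCCL process group.
import time

import torch
import torch.distributed as dist


def timed_all_reduce(local_out: torch.Tensor) -> torch.Tensor:
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    dist.all_reduce(local_out)
    torch.cuda.synchronize()
    t1 = time.perf_counter()
    print(f"rank {dist.get_rank()}: all_reduce (incl. wait) = {(t1 - t0) * 1e3:.2f} ms")
    return local_out
```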

Chillee commented 8 months ago

@duanzhaol I don't think you're using compilation are you?

duanzhaol commented 8 months ago

> @duanzhaol I don't think you're using compilation are you?

Right, I haven't used compilation in my setup. Is compilation a necessary step for tensor parallelism? I assumed it would work without it.

Chillee commented 8 months ago

Compilation will significantly reduce the tensor-parallel latency.

In general, gpt-fast will not be particularly fast without using compilation :P
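(For reference, "compilation" here means wrapping the per-token decode step in `torch.compile`, which gpt-fast exposes via the `--compile` flag on generate.py. A rough sketch of the idea, with an illustrative decode function rather than gpt-fast's exact code:)

```python
# Sketch: compiling the single-token decode step. mode="reduce-overhead" uses
# CUDA graphs, removing per-kernel launch overhead in the decode loop and hiding
# much of the small-kernel and all-reduce latency under tensor parallelism.
import torch


def decode_one_token(model, x, input_pos):
    # Illustrative stand-in for the real single-token decode step.
    logits = model(x, input_pos)
    return torch.argmax(logits[:, -1], dim=-1)


compiled_decode = torch.compile(decode_one_token, mode="reduce-overhead", fullgraph=True)
```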

duanzhaol commented 8 months ago

I opted not to use compilation because my objective is to run tensor parallelism on a serverless platform. The initial compilation is significantly time-consuming, which is impractical in our case since each request requires a fresh compilation, and that overhead is unacceptable for our use case. If there were a way to persist or checkpoint the results of compilation, similar to serializing an engine in TensorRT, it would greatly improve efficiency. Unfortunately, I have yet to find a tool or method that provides this capability. Any guidance or suggestions on how to address this challenge would be immensely appreciated.
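One partial mitigation, depending on the PyTorch version, is to point Inductor's on-disk caches at persistent storage so that warm starts reuse previously compiled artifacts instead of recompiling from scratch. A sketch (the cache path is only an example, and the FX graph cache may already be enabled by default in newer releases):

```python
# Sketch: reuse torch.compile artifacts across processes via Inductor's on-disk
# caches. Set the environment variables before importing torch so they are
# picked up when Inductor loads its config.
import os

# Example path on storage that persists between serverless invocations (assumption).
os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/mnt/shared/torchinductor_cache"
# Cache compiled FX graphs on disk (default in newer PyTorch releases).
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"

import torch  # noqa: E402
```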

Chillee commented 8 months ago

@duanzhaol Out of curiosity, what level of overhead is acceptable?

duanzhaol commented 8 months ago

> @duanzhaol Out of curiosity, what level of overhead is acceptable?

Maybe less than a second? In serverless, if the function is purely stateless, every request needs to recompile the model. And even on a platform optimized for Model-as-a-Service, compilation would severely restrict the ability to scale out new instances to handle bursty workloads.

Moreover, I'm still puzzled by the significant straggler issue we're encountering without compilation. The kernel launch times, according to the nsys traces, show considerable variability. Does this problem originate from the implementation of gpt-fast, or is it a broader issue with using tensor parallelism in PyTorch without prior compilation?
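In case it helps, here is the kind of standalone check I had in mind for separating pure launch overhead from the collective (a sketch with a stand-in workload of small matmuls, not gpt-fast's real decode step):

```python
# Sketch: quantify launch-time variability with torch.profiler, independent of nsys.
import torch
from torch.profiler import ProfilerActivity, profile

# Stand-in workload (hypothetical): a chain of small GEMMs, roughly mimicking the
# many short kernels of a sharded decode step.
weights = [torch.randn(2048, 2048, device="cuda", dtype=torch.float16) for _ in range(8)]


def tiny_step(x):
    for w in weights:
        x = x @ w
    return x


x = torch.randn(1, 2048, device="cuda", dtype=torch.float16)

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for _ in range(20):
        x = tiny_step(x)
    torch.cuda.synchronize()

# Compare CPU-side launch time against device time per kernel.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```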