microsoft / BitBLAS

BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment.

How to achieve vLLM Row Parallelism correctly? #186

Open KeremTurgutlu opened 2 months ago

KeremTurgutlu commented 2 months ago

I am trying to make bitblas quantized weights work with vLLM's tensor parallelism. In vLLM, tensor parallelism is implemented with column-parallel and row-parallel linear layers.
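For reference, here is the unquantized math both schemes should reproduce (a minimal fp16 sketch with made-up shapes, assuming a CUDA device; it is independent of bitblas and only illustrates the sharding):

import torch

# Hypothetical shapes for illustration only.
M, K, N = 16, 1024, 2048
x = torch.randn(M, K, dtype=torch.float16, device="cuda")
W = torch.randn(N, K, dtype=torch.float16, device="cuda")  # (out_features, in_features)

ref = x @ W.t()

# Column parallel: shard W along the output dim (N), concatenate partial outputs.
W0, W1 = W.chunk(2, dim=0)
col_out = torch.cat([x @ W0.t(), x @ W1.t()], dim=1)
assert torch.allclose(col_out, ref, atol=1e-2, rtol=1e-2)

# Row parallel: shard W and x along the input dim (K), sum (all-reduce) partial outputs.
Wk0, Wk1 = W.chunk(2, dim=1)
x0, x1 = x.chunk(2, dim=1)
row_out = x0 @ Wk0.t() + x1 @ Wk1.t()
assert torch.allclose(row_out, ref, atol=1e-2, rtol=1e-2)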

Similar to the reference vLLM integration here, qweight, scales, and zeros have the following shapes, input dims, and output dims:

qweight = Parameter(
    torch.empty(
        output_size_per_partition,
        input_size_per_partition // self.layer_pack_factor,
        device="cuda",
        dtype=torch.uint8,
    ),
    requires_grad=False,
)
set_weight_attrs(
    qweight,
    {
        "input_dim": 1,
        "output_dim": 0,
        "packed_dim": 1,
        "pack_factor": self.layer_pack_factor,
    },
)

scales = Parameter(
    torch.empty(
        output_size_per_partition,
        input_size_per_partition // self.layer_group_size,
        device="cuda",
        dtype=params_dtype,
    ),
    requires_grad=False,
)
set_weight_attrs(
    scales,
    {
        "input_dim": 1,
        "output_dim": 0,
    },
)

zeros = Parameter(
    torch.empty(
        output_size_per_partition,
        input_size_per_partition // self.layer_group_size,
        device="cuda",
        dtype=params_dtype,
    ),
    requires_grad=False,
)
set_weight_attrs(
    zeros,
    {
        "input_dim": 1,
        "output_dim": 0,
    },
)
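For context, this is roughly the sharding those attributes imply (a hypothetical sketch, not vLLM's actual weight loader; narrow_to_shard is a made-up helper):

import torch

def narrow_to_shard(full_tensor: torch.Tensor, shard_dim: int,
                    tp_rank: int, tp_size: int) -> torch.Tensor:
    # Slice an already packed/grouped tensor down to the piece owned by one
    # tensor-parallel rank along shard_dim.
    shard_size = full_tensor.shape[shard_dim] // tp_size
    return full_tensor.narrow(shard_dim, tp_rank * shard_size, shard_size)

# Row-parallel layer (input_dim == 1 for qweight/scales/zeros above):
#   qweight shard:      (out, in // tp // pack_factor)
#   scales/zeros shard: (out, in // tp // group_size)
# This only lines up if in_features // tp_size is a multiple of both
# pack_factor and group_size, so a shard never cuts through a packed byte
# or a quantization group.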

Our vLLM bitblas integration can be found here.

Column Parallel

In column parallelism, the weight is partitioned along its output dim and the partial outputs are concatenated along the output-feature dim via an all-gather operation. Testing this behavior with bitblas works as expected:

# Mimic the output without any tensor parallelism.
bitblas_output = matmul_eng(x, Wq_bitblas, scales_bitblas, zeros_bitblas)

# Reconfigure matmul eng for new dims:
matmul_config = bitblas.MatmulConfig(M=BITBLAS_OPT_M,
                                        N=out_features//2,
                                        K=in_features,
                                        A_dtype="float16",  
                                        W_dtype={4:"uint4",2:"uint2"}[NBITS],
                                        accum_dtype="float16",  
                                        out_dtype="float16",  
                                        layout="nt",  
                                        with_bias=False, 
                                        group_size=GROUPSIZE,
                                        with_scaling=True,  
                                        with_zeros=True,  
                                        zeros_mode="original",  
                                        #fast_decoding=True,
                                    )
matmul_eng = _get_or_create_bitblas_operator(matmul_config)     

# Split weights along the output dimension, which is the 'N' dimension from the matmul eq: (M x K @ K x N).
# With layout 'nt' the packed weight is (N, K // pack_factor), so this is dim 0.
Wq_bitblas_split_1, Wq_bitblas_split_2 = Wq_bitblas.split(split_size=Wq_bitblas.size(0) // 2, dim=0)
zeros_bitblas_split_1, zeros_bitblas_split_2 = zeros_bitblas.split(split_size=zeros_bitblas.size(0) // 2, dim=0)
scales_bitblas_split_1, scales_bitblas_split_2 = scales_bitblas.split(split_size=scales_bitblas.size(0) // 2, dim=0)
bitblas_output_split_1 = matmul_eng(x, Wq_bitblas_split_1, scales_bitblas_split_1, zeros_bitblas_split_1)
bitblas_output_split_2 = matmul_eng(x, Wq_bitblas_split_2, scales_bitblas_split_2, zeros_bitblas_split_2)

# Test passes.
bitblas_sharded_output = torch.cat([bitblas_output_split_1, bitblas_output_split_2], dim=1)
assert torch.allclose(bitblas_sharded_output, bitblas_output, atol=1e-2, rtol=1e-2)

Row Parallel

In row parallelism, the weight is partitioned along its input dim (K), the input is split along the same dim, and the partial outputs are reduced (summed) via an all-reduce operation.

The output of the following test is very different from the output without tensor parallelism.

# Mimic the output without any tensor parallelism.
bitblas_output = matmul_eng(x, Wq_bitblas, scales_bitblas, zeros_bitblas)

# Reconfigure matmul eng for new dims:
matmul_config = bitblas.MatmulConfig(M=BITBLAS_OPT_M,
                                        N=out_features,
                                        K=in_features//2,
                                        A_dtype="float16",  
                                        W_dtype={4:"uint4",2:"uint2"}[NBITS],
                                        accum_dtype="float16",  
                                        out_dtype="float16",  
                                        layout="nt",  
                                        with_bias=False, 
                                        group_size=GROUPSIZE,
                                        with_scaling=True,  
                                        with_zeros=True,  
                                        zeros_mode="original",  
                                        #fast_decoding=True,
                                    )
matmul_eng = _get_or_create_bitblas_operator(matmul_config)

# Split weights along the input dimension, which is the 'K' dimension from the matmul eq: (M x K @ K x N).
# With layout 'nt' the packed weight is (N, K // pack_factor), so this is dim 1.
Wq_bitblas_split_1, Wq_bitblas_split_2 = Wq_bitblas.split(split_size=Wq_bitblas.size(1) // 2, dim=1)
zeros_bitblas_split_1, zeros_bitblas_split_2 = zeros_bitblas.split(split_size=zeros_bitblas.size(1) // 2, dim=1)
scales_bitblas_split_1, scales_bitblas_split_2 = scales_bitblas.split(split_size=scales_bitblas.size(1) // 2, dim=1)

# Also split the input along the K dimension.
x_1, x_2 = x.split(split_size=x.size(1) // 2, dim=1)

bitblas_output_split_1 = matmul_eng(x_1, Wq_bitblas_split_1, scales_bitblas_split_1, zeros_bitblas_split_1)
bitblas_output_split_2 = matmul_eng(x_2, Wq_bitblas_split_2, scales_bitblas_split_2, zeros_bitblas_split_2)

# Test fails
bitblas_sharded_output = bitblas_output_split_1 + bitblas_output_split_2
assert torch.allclose(bitblas_sharded_output, bitblas_output, atol=1e-2, rtol=1e-2)
[Screenshot attached, 2024-09-18]

Is there a wrong assumption here regarding the layout and/or packing? Maybe the zeros and scales are not correctly split? Appreciate your help. Thanks.
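In case it helps narrow things down, a dequantization-level check of the split can rule the scales/zeros in or out. This is a minimal sketch, assuming zeros_mode="original" dequantizes as W = (Wq - zeros) * scales with one scale/zero per GROUPSIZE columns of K, and using hypothetical names Wq_unpacked, scales, zeros for the tensors before any bitblas packing/layout transform:

import torch

def dequant(wq, scales, zeros, group_size):
    # Expand per-group scales/zeros (N, K // group_size) to per-column (N, K),
    # then apply W = (Wq - zeros) * scales ("original" zeros mode).
    s = scales.repeat_interleave(group_size, dim=1)
    z = zeros.repeat_interleave(group_size, dim=1)
    return (wq.to(s.dtype) - z) * s

W_ref = dequant(Wq_unpacked, scales, zeros, GROUPSIZE)

# Split everything along K (assumes K // 2 is a multiple of GROUPSIZE) and
# dequantize each half with its own scales/zeros.
Wq_a, Wq_b = Wq_unpacked.chunk(2, dim=1)
s_a, s_b = scales.chunk(2, dim=1)
z_a, z_b = zeros.chunk(2, dim=1)
W_split = torch.cat([dequant(Wq_a, s_a, z_a, GROUPSIZE),
                     dequant(Wq_b, s_b, z_b, GROUPSIZE)], dim=1)

# If this passes, the per-group split of scales/zeros is consistent, which
# would point the mismatch at how the packed/transformed qweight is sliced.
assert torch.allclose(W_split, W_ref, atol=1e-3, rtol=1e-3)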

LeiWang1999 commented 2 months ago

@KeremTurgutlu, thanks for reporting! I'll take a look soon.

LeiWang1999 commented 2 months ago

@KeremTurgutlu Do we have a unit test script for us to reproduce? I can reproduce with tp=2 on an end-to-end model.

KeremTurgutlu commented 2 months ago

Yes, I think tp=2 should be a good start. The easiest would be to rerun the Python code samples I shared in this issue, since they isolate the problem to a single matmul rather than end-to-end model testing.
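If a single-file reproduction is useful, the row-parallel snippet above collapses into something like this (a sketch reusing the names from this issue, e.g. BITBLAS_OPT_M, NBITS, GROUPSIZE, and _get_or_create_bitblas_operator, so it is not runnable standalone; make_engine and row_parallel_max_error are made-up helpers):

def make_engine(n, k):
    cfg = bitblas.MatmulConfig(M=BITBLAS_OPT_M, N=n, K=k,
                               A_dtype="float16",
                               W_dtype={4: "uint4", 2: "uint2"}[NBITS],
                               accum_dtype="float16", out_dtype="float16",
                               layout="nt", with_bias=False,
                               group_size=GROUPSIZE, with_scaling=True,
                               with_zeros=True, zeros_mode="original")
    return _get_or_create_bitblas_operator(cfg)

def row_parallel_max_error(x, Wq, scales, zeros):
    # Unsharded reference: K taken from the activation, N from the packed weight.
    ref = make_engine(Wq.size(0), x.size(1))(x, Wq, scales, zeros)
    # tp=2 row-parallel simulation: split x, Wq, scales, zeros along K and sum.
    eng = make_engine(Wq.size(0), x.size(1) // 2)
    x_a, x_b = x.chunk(2, dim=1)
    Wq_a, Wq_b = Wq.chunk(2, dim=1)
    s_a, s_b = scales.chunk(2, dim=1)
    z_a, z_b = zeros.chunk(2, dim=1)
    out = eng(x_a, Wq_a, s_a, z_a) + eng(x_b, Wq_b, s_b, z_b)
    return (out - ref).abs().max().item()

print(row_parallel_max_error(x, Wq_bitblas, scales_bitblas, zeros_bitblas))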

KeremTurgutlu commented 2 months ago

@LeiWang1999 Anything else needed from my side? Thanks.

LeiWang1999 commented 2 months ago

@tzj-fxz thanks, I'm working on it, the last few days have been busy.