pytorch / torchchat


linear:int4 quantization regression testing #1362

Open · mikekgfb opened 6 days ago

mikekgfb commented 6 days ago

🚀 The feature, motivation and pitch

In the past, we padded weights so that int4 quantization worked even when the group size did not evenly divide the layer dimensions. Since we have decided to remove the padding, int4 quantization is now simply skipped for layers with non-multiple dimensions. Among other things, this means int4 quantization is no longer tested, because the stories model's layer dimensions are not a multiple of 256.

  Time to load model: 0.19 seconds
  Quantizing the model with: {'executor': {'accelerator': 'cuda'}, 'precision': {'dtype': 'bf16'}, 'linear:int4': {'groupsize': 256}}
  Skipping quantizing weight with int4 weight only quantization because the shape of weight torch.Size([288, 288]) is not compatible with group_size 256
  Skipping quantizing weight with int4 weight only quantization because the shape of weight torch.Size([288, 288]) is not compatible with group_size 256
  Skipping quantizing weight with int4 weight only quantization because the shape of weight torch.Size([288, 288]) is not compatible with group_size 256
  Skipping quantizing weight with int4 weight only quantization because the shape of weight torch.Size([288, 288]) is not compatible with group_size 256
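For reference, the skip boils down to a divisibility check between the linear weight's in-features and the group size. A minimal sketch of the condition (the helper name here is illustrative, not torchchat's actual internals):

```python
import torch

def int4_groupsize_compatible(weight: torch.Tensor, group_size: int) -> bool:
    # int4 weight-only quantization groups values along the in-features dim,
    # so in-features must be an exact multiple of group_size.
    in_features = weight.shape[1]
    return in_features % group_size == 0

w = torch.randn(288, 288)                  # stories-model-sized linear weight
print(int4_groupsize_compatible(w, 256))   # False -> the layer gets skipped
print(int4_groupsize_compatible(w, 32))    # True  -> 288 = 9 * 32
```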

Some options:

Alternatives

Put padding back into int4 quantization.

Yes, it's not ideal, but then again, silently skipping quantization isn't either. In my own experience, just making things work increases utility for end users. If there's a real concern about performance (int4 quantization with padding may still beat no quantization!), pad and issue a warning to users.
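If padding does come back, the core of it is just rounding in-features up to the next multiple of the group size before quantizing (and padding activations to match at runtime). A rough sketch of the idea, not the previous torchchat implementation:

```python
import torch
import torch.nn.functional as F

def pad_in_features(weight: torch.Tensor, group_size: int) -> torch.Tensor:
    # Pad the in-features (last) dimension up to the next multiple of group_size.
    # Zero columns contribute nothing to the matmul output, so results are
    # unchanged as long as activations are zero-padded the same way at runtime.
    pad = (-weight.shape[1]) % group_size
    return F.pad(weight, (0, pad)) if pad else weight

w = torch.randn(288, 288)
print(pad_in_features(w, 256).shape)  # torch.Size([288, 512])
```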

Additional context

No response

RFC (Optional)

No response

Jack-Khuu commented 4 days ago

Nice catch. Now that we have the 1B models, we might be able to switch over to one of those instead of stories for a more representative model (or change the groupsize).

As for padding vs not padding, I'll need to think about that before I make a call.

Jack-Khuu commented 4 days ago

We should update the test so that this doesn't happen

https://github.com/pytorch/torchchat/blob/main/.github/workflows/pull.yml#L335
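Until the padding question is settled, the quickest fix there is probably to pass a groupsize that actually divides the stories model's 288-wide linears. A small sketch of the constraint (the candidate list is an assumption about what the int4 kernel typically accepts, not something I've verified):

```python
# Which of the commonly used int4 group sizes divide the stories model's
# 288-wide linear layers? Only those avoid the silent skip above.
candidate_groupsizes = [32, 64, 128, 256]   # assumed typical choices
in_features = 288
valid = [g for g in candidate_groupsizes if in_features % g == 0]
print(valid)  # [32] -> e.g. pass {"linear:int4": {"groupsize": 32}} in the workflow
```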

mikekgfb commented 3 days ago
  • On the other hand, it adds feature overhead and shows a practice that I'm not sure we should advertise; they should just update the groupsize instead

We currently apply a uniform transformation to all layers, so we are limited to the GCD of all layers' dimensions. For language-llama (hard to believe it's been almost a year since we brought up LLM support with the translation model), that GCD was small. So we need a solution.
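To make the GCD point concrete, this is the constraint a uniform groupsize has to satisfy (the layer names and dimensions below are made up for illustration, not a real checkpoint):

```python
import math
from functools import reduce

# With one groupsize applied uniformly, the largest usable value is bounded
# by the GCD of the in-features of every quantized linear layer.
in_features_per_layer = {      # illustrative dims only
    "attention.wq": 288,
    "attention.wo": 288,
    "feed_forward.w1": 288,
    "feed_forward.w2": 768,
}
print(reduce(math.gcd, in_features_per_layer.values()))  # 96: any uniform groupsize must divide this
```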

We sorta have several options (sorta because we may need more than one for a full solution):

1. Padding. IIRC inductor pads GEMMs for better alignment on tensor cores, so padding is definitely in the cards, but... it is a crutch. (Crutch sounds pejorative, but a crutch is much better than no solution!)

2. Support for groupwise GEMM with a last group that is not a full group. It's easy-ish to implement, but downstream it might still trigger padding, at least for the Triton path to map onto tensor cores, so we might just transfer the responsibility for padding elsewhere. I'm also not sure how well XNNPACK can handle a last partial group. Digant might have ideas...

3. Some torchchat quantizers have a filter that allows applying the linear quantization only to specific layers (see https://github.com/pytorch/torchchat/blame/93f713f12507b5cef18a42c411030c90b9326369/torchchat/utils/quantize.py#L600 and https://github.com/pytorch/torchchat/blame/93f713f12507b5cef18a42c411030c90b9326369/torchchat/utils/quantize.py#L631-L633). The example applies this to just a 2-class system: the output projection (which is wildly different from the other layers in dimension, i.e. much larger!) versus everything else. I've discussed having a more general filter (e.g., a regexp on the layer name) with @jerryzh168 in the past; a sketch of that idea is at the end of this comment. The downside is that you then need to know all sorts of internals like layer names, but if you're tuning at that level you need that much detail anyway.

Also, under (1) padding, you were concerned about bringing in another concept. No, you are not bringing in the concept: you just make it work, and nobody will ever ask unless they have to. That gives you the choice of offering something with nice UX where the user doesn't need to study the details, while still allowing users to deep dive for incremental performance.

This is about how much we want to put on the engineer doing the quantization, which is why I just did the bi-partition into output/!output node types. It also makes padding attractive, because you can get something running without getting a PhD in quantization!
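To make option 3 concrete, here is a sketch of the regexp idea, assuming a torchao-style filter_fn(module, fqn) callback; it is not the current torchchat quantizer API:

```python
import re
import torch.nn as nn

def make_name_filter(pattern: str):
    # Build a filter_fn(module, fqn) that selects only nn.Linear layers whose
    # fully-qualified name matches the given regexp.
    regex = re.compile(pattern)
    def filter_fn(module: nn.Module, fqn: str) -> bool:
        return isinstance(module, nn.Linear) and regex.search(fqn) is not None
    return filter_fn

# e.g. quantize every linear except the output projection:
skip_output = make_name_filter(r"^(?!output).*")
# quantize_(model, int4_config, filter_fn=skip_output)   # hypothetical usage
```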

mikekgfb commented 2 days ago

  • Nice catch, now that we have the 1B models we might be able to switch over to that instead of stories for a more representative model.

Let's make sure it does not explode the runtime, so that the tests don't become onerous (at which point developers start ignoring them or arguing for fewer tests, etc.).

Also, to get the 1B model we need the HF tokens, which is why the periodic tests with llama models are failing today, as per #1351.