neuralmagic / sparseml

Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
Apache License 2.0
2.07k stars 148 forks

Default quantization: true or false in SparseGPT #2357

Open sriyachakravarthy opened 1 month ago

sriyachakravarthy commented 1 month ago

Hi! In the recipe, if I do not want to quantize and only want to perform structured pruning, is it okay to set quantize: false as below and omit the QuantizationModifier from the recipe?

SparseGPTModifier:
  sparsity: 0.5
  block_size: 128
  sequential_update: true
  quantize: false
  percdamp: 0.01
  mask_structure: "16:32"
  targets: ["re:model.layers.\\d+$"]

rahul-tuli commented 1 month ago

Hi @sriyachakravarthy,

Thank you for reaching out and opening an issue on SparseML!

The SparseGPTModifier no longer accepts a quantize argument, so you can safely remove it from your recipe. This will ensure that your model remains unquantized without affecting the pruning process.

Additionally, I’d recommend considering our latest framework, LLMCompressor, which offers enhanced capabilities for model compression. If you're open to using it, the recipe would look slightly different:

oneshot_stage:
  pruning_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      block_size: 128
      sequential_update: true
      percdamp: 0.01
      mask_structure: "16:32"
      targets: ["re:model.layers.\\d+$"]
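
As a side note on the recipe above: the mask_structure: "16:32" setting requests semi-structured (N:M) sparsity, meaning that in every contiguous group of 32 weights, 16 are zeroed (typically the smallest-magnitude ones). A rough NumPy sketch of that masking, assuming magnitude-based selection; the nm_prune helper here is purely illustrative and is not part of SparseML or LLMCompressor:

```python
import numpy as np

def nm_prune(weights: np.ndarray, n: int = 16, m: int = 32) -> np.ndarray:
    """Zero the n smallest-magnitude weights in each group of m
    (semi-structured n:m sparsity, e.g. mask_structure "16:32")."""
    flat = weights.reshape(-1)
    assert flat.size % m == 0, "weight count must be divisible by the group size"
    groups = flat.reshape(-1, m)
    # indices of the n smallest-magnitude entries in each group of m
    idx = np.argsort(np.abs(groups), axis=1)[:, :n]
    mask = np.ones_like(groups)
    np.put_along_axis(mask, idx, 0.0, axis=1)
    return (groups * mask).reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 32))
pruned = nm_prune(w)
# each row of 32 weights now contains exactly 16 zeros
print((pruned.reshape(-1, 32) == 0).sum(axis=1))
```

Real implementations (SparseGPT included) choose which weights to zero using second-order information rather than raw magnitude, but the resulting mask obeys the same 16-in-32 pattern.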

sriyachakravarthy commented 1 month ago

Thank you, @rahul-tuli, will try.

sriyachakravarthy commented 1 month ago

Also, will llm-compressor run on an AMD machine?

markurtz commented 1 month ago

Hi @sriyachakravarthy, I'd like to clarify a bit more about this. Our LLM Compressor flows are currently for vLLM / our compression pathways for GPUs and specifically for Transformers models. SparseML is still used to create compressed ONNX models that can run in DeepSparse and ONNX Runtime for NLP, NLG, and CV models.

For AMD, SparseML will work for AMD CPUs, and LLM Compressor will work for AMD GPUs.

Hope this helps!

sriyachakravarthy commented 1 month ago

Yes, Thanks!!

sriyachakravarthy commented 1 month ago

Hi! I do not see a model size reduction after pruning using the llm-compressor framework. Kindly help.
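
(A likely explanation, for context: pruning only sets weights to zero, and a dense checkpoint stores those zeros at full width, so the on-disk size is unchanged unless the model is saved in a format that compresses or skips the zeros, e.g. LLMCompressor's save_compressed option if your version supports it. A small NumPy sketch of the effect, with the library specifics being assumptions:)

```python
import io
import numpy as np

rng = np.random.default_rng(0)
dense = rng.standard_normal((1024, 1024)).astype(np.float32)

# "prune" by zeroing roughly half the weights (smallest magnitudes)
pruned = dense.copy()
pruned[np.abs(pruned) < np.median(np.abs(pruned))] = 0.0

# dense serialization: the zeroed tensor takes exactly as many bytes
buf_dense, buf_pruned = io.BytesIO(), io.BytesIO()
np.save(buf_dense, dense)
np.save(buf_pruned, pruned)
print(buf_dense.getbuffer().nbytes == buf_pruned.getbuffer().nbytes)

# a compressed container shrinks, because runs of zeros compress well
buf_zip = io.BytesIO()
np.savez_compressed(buf_zip, w=pruned)
print(buf_zip.getbuffer().nbytes < buf_pruned.getbuffer().nbytes)
```

So the sparsity is there in the tensors; it only shows up as a smaller file once a sparse/compressed storage format (or downstream runtime) exploits it.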