neuralmagic / sparseml

Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
Apache License 2.0

Update pydantic support to v2 #2248

Closed: rahul-tuli closed this 7 months ago

rahul-tuli commented 7 months ago

This PR updates SparseML's pydantic support to v2.
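For context, a minimal sketch of the kind of change a pydantic v1 → v2 migration typically involves: `@validator` becomes `@field_validator`, and `.dict()`/`.json()` become `.model_dump()`/`.model_dump_json()`. The `ModifierModel` class and its fields below are hypothetical, for illustration only, and are not taken from this diff.

```python
# Illustrative pydantic v1 -> v2 migration sketch; the model below is
# hypothetical and does not appear in the sparseml codebase.
from pydantic import BaseModel, field_validator


class ModifierModel(BaseModel):
    start_epoch: float
    end_epoch: float

    # v2: @field_validator replaces the v1 @validator decorator
    @field_validator("end_epoch")
    @classmethod
    def end_after_start(cls, v, info):
        # info.data holds previously validated fields (start_epoch here)
        if "start_epoch" in info.data and v < info.data["start_epoch"]:
            raise ValueError("end_epoch must be >= start_epoch")
        return v


m = ModifierModel(start_epoch=0, end_epoch=5)
# v2: model_dump()/model_dump_json() replace dict()/json()
print(m.model_dump())
```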

Test methodology:

The sanity script below parses a raw recipe string into a Recipe, serializes it back to YAML, re-parses the result, and asserts that the round trip is stable.

Sanity script:

```python
# local/feature/main.py
# Sanity check: a Recipe serialized to YAML and re-parsed must
# serialize back to the same YAML (round-trip stability).

from sparseml.core import Recipe

recipe = """
first_oneshot_stage:
    pruning_modifiers:
        ConstantPruningModifier:
            start_epoch: 0
            end_epoch: 5
            targets: __ALL_PRUNABLE__

        MagnitudePruningModifier:
            start_epoch: 5
            end_epoch: 10
            init_sparsity: 0.1
            final_sparsity: 0.5
            targets: __ALL_PRUNABLE__

    quantization_modifiers:
        QuantizationModifier:
            start_epoch: 10
            end_epoch: 15
            bits: 8
            targets: __ALL_PRUNABLE__

second_oneshot_stage:
    pruning_modifiers:
        ConstantPruningModifier:
            start_epoch: 15
            end_epoch: 20
            targets: __ALL_PRUNABLE__

        MagnitudePruningModifier:
            start_epoch: 20
            end_epoch: 25
            init_sparsity: 0.1
            final_sparsity: 0.5
            targets: __ALL_PRUNABLE__

    quantization_modifiers:
        QuantizationModifier:
            start_epoch: 25
            end_epoch: 30
            bits: 8
            targets: __ALL_PRUNABLE__
"""

# Parse the raw recipe string and serialize it to YAML.
recipe_1 = Recipe.create_instance(recipe)
yaml_str = recipe_1.yaml()

# Re-parse the serialized YAML and serialize it again.
recipe_2 = Recipe.create_instance(yaml_str)
recipe_2_yaml = recipe_2.yaml()

# The two serializations must match for the round trip to be lossless.
assert yaml_str.strip() == recipe_2_yaml.strip()
```

Note: tests on GHA will fail until the equivalent sparsezoo change (https://github.com/neuralmagic/sparsezoo/pull/483) lands.