gongouveia / Resnet-Quantization-Experiments

Tools for per-layer quantization: fp32, fp16, PTQ and QAT int8 (int4 not yet implemented)

When I try to run this project, I find there is a bug in quantize_int8_PTQ.py #1

Open · DomineeringDragon opened this issue 5 days ago

DomineeringDragon commented 5 days ago

In `quantize_int8_PTQ.py`, the model is prepared and converted before the quantization configuration is defined:

```python
# Quantize the model
model_prepared = tq.prepare(model_fused)
model_quantized = tq.convert(model_prepared)

# Define the quantization configuration
quant_config = tq.get_default_qconfig('fbgemm')
model_fused.qconfig = quant_config
```

This should be changed to the following, because `tq.prepare()` reads `model_fused.qconfig` to decide where to insert observers; if the qconfig is assigned only after `prepare()` has run, no observers are inserted and `convert()` returns a model that is never actually quantized:

```python
# Define the quantization configuration
quant_config = tq.get_default_qconfig('fbgemm')
model_fused.qconfig = quant_config

# Quantize the model
model_prepared = tq.prepare(model_fused)
model_quantized = tq.convert(model_prepared)
```
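For reference, a minimal end-to-end eager-mode PTQ flow with the corrected ordering could look like the sketch below. The tiny model, fusion list, and random calibration data are placeholders for illustration, not the repo's actual ResNet code, and `tq` is assumed to be `torch.quantization`:

```python
import torch
import torch.nn as nn
import torch.quantization as tq  # assumed alias, matching the snippets above


# Placeholder model for illustration -- the repo quantizes a ResNet instead.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # quantizes the float input
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()  # dequantizes the output back to float

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.bn(self.conv(x)))
        return self.dequant(x)


model = TinyNet().eval()  # PTQ is done in eval mode

# Fuse conv + bn + relu so the fused unit is quantized as one op
model_fused = tq.fuse_modules(model, [['conv', 'bn', 'relu']])

# 1. Define the quantization configuration first
model_fused.qconfig = tq.get_default_qconfig('fbgemm')

# 2. prepare() reads .qconfig and inserts observers
model_prepared = tq.prepare(model_fused)

# 3. Calibrate: run representative data so observers record activation ranges
with torch.no_grad():
    for _ in range(10):
        model_prepared(torch.randn(1, 3, 32, 32))

# 4. convert() replaces observed modules with their int8 counterparts
model_quantized = tq.convert(model_prepared)
```

The calibration pass matters: the inserted observers record activation ranges, and `convert()` uses those ranges to pick the int8 scales and zero points.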

gongouveia commented 5 days ago

@DomineeringDragon Hello, this project is no longer updated, and I am not supporting it at the moment. Please let me know what your objective and needs are.