quic / aimet

AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
https://quic.github.io/aimet-pages/index.html

TypeError: descriptor 'masked_fill_' for 'torch._C._TensorBase' objects doesn't apply to a 'Proxy' object #2655

Open · Francis235 opened this issue 10 months ago

Francis235 commented 10 months ago

When I use AIMET AutoQuant to quantize my model, I hit the following error:

My code is as follows:

```python
...
prepared_net_g_encoder = prepare_model(symbolic_traced_net_g_encoder)

auto_quant = AutoQuant(prepared_net_g_encoder,
                       dummy_input=(x, x_l, spk, speed),
                       data_loader=unlabeled_vits_data_loader,
                       eval_callback=eval_callback)
sim, initial_accuracy = auto_quant.run_inference()
```

... prepare_model() and AutoQuant() run without errors; the error occurs at auto_quant.run_inference(). I noticed that AIMET has a module named QuantizableMultiheadAttention(nn.MultiheadAttention) at /workspace/aimet/TrainingExtensions/torch/src/python/aimet_torch/transformers/activation.py, but I don't know whether that code is related to my issue. I tried changing masked_fill_ to masked_fill, but the error still occurs, and I'm not sure what to try next. Any suggestion would be helpful, thank you.
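For context, this kind of TypeError typically appears when an unbound torch.Tensor method descriptor is called with a torch.fx Proxy as its first argument during symbolic tracing. The snippet below is only an illustrative sketch, not AIMET code: the module names MaskedFillDemo and MaskedFillTraceable are made up to contrast the failing call style with an FX-friendly, out-of-place rewrite.

```python
import torch
import torch.fx

class MaskedFillDemo(torch.nn.Module):
    def forward(self, scores, mask):
        # Unbound descriptor call: during torch.fx tracing, `scores` is a Proxy
        # rather than a Tensor, so a call written this way can fail with
        # "descriptor 'masked_fill_' ... doesn't apply to a 'Proxy' object".
        return torch.Tensor.masked_fill_(scores, mask, float("-inf"))

class MaskedFillTraceable(torch.nn.Module):
    def forward(self, scores, mask):
        # Bound, out-of-place call: FX records this as a call_method node
        # and tracing goes through.
        return scores.masked_fill(mask, float("-inf"))

traced = torch.fx.symbolic_trace(MaskedFillTraceable())  # traces cleanly
print(traced.graph)
# torch.fx.symbolic_trace(MaskedFillDemo()) would raise a TypeError like the one above.
```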

Francis235 commented 10 months ago

I'm on the 'develop' branch; the latest commit ID is 2444ab53f7389e.

quic-hitameht commented 10 months ago

Hi @Francis235

Are you applying prepare_model(...) outside of the AutoQuant(...) API? I'm asking because AutoQuant internally tries to prepare the model before applying PTQ techniques.
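If that's the case, one thing to try is to hand the original, unprepared model to AutoQuant and let it run model preparation itself. This is only a minimal sketch of that suggestion, not a confirmed fix: the import path assumes aimet_torch's AutoQuant API, and net_g_encoder stands in for the original (untraced, unprepared) module; the other names are taken from the issue.

```python
from aimet_torch.auto_quant import AutoQuant

# Pass the original module (not the symbolically traced / prepared one) and
# let AutoQuant handle model preparation internally before the PTQ steps.
auto_quant = AutoQuant(net_g_encoder,
                       dummy_input=(x, x_l, spk, speed),
                       data_loader=unlabeled_vits_data_loader,
                       eval_callback=eval_callback)
sim, initial_accuracy = auto_quant.run_inference()
```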