Mxbonn / INQ-pytorch

A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights"

use `requires_grad = False` #6

Closed · xysun closed 5 years ago

xysun commented 5 years ago

Hello,

I noticed that, to implement INQ, you plug in

`d_p.mul_(group['Ts'][idx])`

at the end of SGD's `step` function, where `Ts` is the binary mask from quantization. I was wondering whether the same thing could be achieved by simply setting `requires_grad = False` on the quantized weights (this would then live in the `quantization_scheduler` class), so we wouldn't need to hook into the optimizer code at all.

I'm wondering if you have tried this and whether it would work?
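
For context, here is a minimal sketch of the masking approach being discussed. It is not the repository's actual optimizer: the `MaskedSGD` name is hypothetical, momentum and weight decay are omitted, and `Ts` is assumed to be a list of binary masks (0 = weight already quantized and frozen, 1 = weight still trainable) stored in each parameter group, as in the issue.

```python
import torch
from torch.optim import SGD


class MaskedSGD(SGD):  # hypothetical name, for illustration only
    def __init__(self, params, lr, Ts):
        super().__init__(params, lr=lr)
        # Attach the binary masks to each parameter group.
        for group in self.param_groups:
            group['Ts'] = Ts

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for idx, p in enumerate(group['params']):
                if p.grad is None:
                    continue
                d_p = p.grad
                d_p.mul_(group['Ts'][idx])       # zero the update for quantized weights
                p.add_(d_p, alpha=-group['lr'])  # plain SGD update on the remaining weights
```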

xysun commented 5 years ago

Ah, disregard me. We can only set `requires_grad` on a full tensor, not on part of it.
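
A short illustration of that limitation (a minimal sketch, not code from the repository): `requires_grad` is a flag on the whole tensor, and a slice of a leaf tensor is a non-leaf view whose flag cannot be changed, so per-element freezing has to be done by masking the gradient instead.

```python
import torch

# requires_grad is a whole-tensor flag; there is no per-element version.
w = torch.randn(4, requires_grad=True)

# Trying to change it on a slice fails, because the slice is a non-leaf view.
try:
    w[:2].requires_grad_(False)
except RuntimeError as e:
    print(e)  # complains that requires_grad can only be changed on leaf tensors

# Partial freezing therefore has to be done by zeroing the corresponding
# gradient entries with a binary mask, which is what the INQ optimizer does.
mask = torch.tensor([0., 0., 1., 1.])
loss = (w ** 2).sum()
loss.backward()
w.grad.mul_(mask)  # gradients of the first two (frozen) entries are now zero
print(w.grad)
```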