Open Siamw opened 2 years ago
There is currently no support, but you can add kernel-wise LSQ as follows:
```python
w_reshape = self.weight.reshape([self.weight.shape[0], -1]).transpose(0, 1)
# quantize w_reshape
# ...
w_q = w_reshape_q.transpose(0, 1).reshape(self.weight.shape).detach() + self.weight - self.weight.detach()
```
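To make the idea above concrete, here is a minimal self-contained sketch of kernel-wise (per-output-channel) quantization with the same reshape/transpose layout and the same straight-through-estimator trick (`w_q.detach() + weight - weight.detach()`). The function name `kernel_wise_quantize`, the `step_sizes` tensor, and the bit-width handling are my own illustrative assumptions, not part of this repository's API; a full LSQ implementation would also need the LSQ step-size gradient and grad scaling, which are omitted here.

```python
import torch

def kernel_wise_quantize(weight, step_sizes, num_bits=8):
    """Quantize a conv weight with one learnable step size per output channel.

    weight:     (out_channels, ...) tensor, e.g. a Conv2d weight
    step_sizes: (out_channels,) positive step sizes (hypothetical per-kernel scales)
    """
    qn = -(2 ** (num_bits - 1))       # e.g. -128 for 8 bits
    qp = 2 ** (num_bits - 1) - 1      # e.g. +127 for 8 bits

    # Flatten each kernel and move channels to the last dim: (k, out_channels),
    # so step_sizes broadcasts over the last dimension.
    w = weight.reshape(weight.shape[0], -1).transpose(0, 1)

    # Fake-quantize: scale, round, clamp, rescale (per kernel).
    w_q = torch.clamp(torch.round(w / step_sizes), qn, qp) * step_sizes

    # Restore the original weight layout.
    w_q = w_q.transpose(0, 1).reshape(weight.shape)

    # Straight-through estimator: forward pass uses w_q, but the gradient
    # flows to `weight` as if the quantizer were the identity.
    return w_q.detach() + weight - weight.detach()
```

Usage: call this inside `forward` in place of `self.weight` (e.g. `w_q = kernel_wise_quantize(self.weight, self.step_sizes)`), with `self.step_sizes` registered as an `nn.Parameter` of shape `(out_channels,)`.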
thanks! :D
How can I use kernel-wise quantization (Qmodes)? Does it work with your code?