Open AdrianAlan opened 1 year ago
Hey, I have a question regarding the QBatchNorm implementation. I think I must be overlooking something, but are these used anywhere?
Perhaps the QBN parameters could be quantized directly here, before the scale and bias calculation?
```python
class BatchNormalization(Layer):
    _expected_attributes = [
        Attribute('n_in'),
        Attribute('n_filt', default=0),
        WeightAttribute('scale'),
        WeightAttribute('bias'),
        TypeAttribute('scale'),
        TypeAttribute('bias'),
    ]

    def initialize(self):
        inp = self.get_input_variable()
        shape = inp.shape
        dims = inp.dim_names
        self.add_output_variable(shape, dims)

        gamma = self.model.get_weights_data(self.name, 'gamma')
        beta = self.model.get_weights_data(self.name, 'beta')
        mean = self.model.get_weights_data(self.name, 'moving_mean')
        var = self.model.get_weights_data(self.name, 'moving_variance')

        if self.get_attr('gamma_quantizer'):
            gamma = self.get_attr('gamma_quantizer')(gamma)
            beta = self.get_attr('beta_quantizer')(beta)
            mean = self.get_attr('mean_quantizer')(mean)
            var = self.get_attr('variance_quantizer')(var)

        scale = gamma / np.sqrt(var + self.get_attr('epsilon'))
        bias = beta - mean * scale

        self.add_weights_variable(name='scale', var_name='s{index}', data=scale)
        self.add_weights_variable(name='bias', var_name='b{index}', data=bias)
```
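To illustrate the ordering the question is about, here is a minimal standalone sketch of folding BatchNorm parameters into a single scale/bias pair, with an optional quantizer applied to the raw parameters *before* the fold. The `quantize_po2` helper is a hypothetical stand-in for a QKeras-style power-of-two quantizer, and `fused_bn_params` is not hls4ml API, just an assumption-labeled model of the computation in `initialize()` above.

```python
import numpy as np

def quantize_po2(w):
    # Hypothetical quantizer (NOT the QKeras implementation):
    # rounds each nonzero weight to the nearest signed power of two
    # in log2 space; zeros stay zero.
    sign = np.sign(w)
    mag = np.abs(w)
    exponent = np.round(np.log2(np.where(mag == 0, 1.0, mag)))
    return np.where(sign == 0, 0.0, sign * 2.0 ** exponent)

def fused_bn_params(gamma, beta, mean, var, epsilon=1e-3, quantizer=None):
    # Fold BatchNorm into one scale/bias pair. If a quantizer is
    # given, apply it to the raw parameters first -- the ordering
    # the issue asks about (quantize, then compute scale and bias).
    if quantizer is not None:
        gamma, beta, mean, var = (quantizer(p) for p in (gamma, beta, mean, var))
    scale = gamma / np.sqrt(var + epsilon)
    bias = beta - mean * scale
    return scale, bias
```

Note that quantizing before the fold (as in the snippet above) and quantizing the fused `scale`/`bias` afterwards are not equivalent in general, since the division and square root do not preserve the quantized grid.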