I read the paper, and my understanding is that when we extract features for compressing the channels of one conv layer, we randomly select some patches of the layer's input, which produces a corresponding series of points in the layer's output; each patch corresponds to one output point. But in the `extract_features` function, the points and the patches seem to be selected randomly and independently of each other, so I'm confused.
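To make the correspondence I have in mind concrete, here is a minimal sketch (the conv parameters `kernel`, `stride`, `pad` and the sizes are made up for illustration, not taken from the repo): sampling a random output point `(y, x)` already determines exactly one input patch through the receptive-field arithmetic, so only one random draw should be needed.

```python
import numpy as np

# Hypothetical conv parameters for illustration only.
kernel, stride, pad = 3, 1, 1
nPointsPerLayer = 4
out_h, out_w = 8, 8

rng = np.random.default_rng(0)
# Sample random output points, analogous to randx/randy in extract_features.
randy = rng.integers(0, out_h, nPointsPerLayer)
randx = rng.integers(0, out_w, nPointsPerLayer)

# Each output point (y, x) maps to exactly one input patch:
# rows y*stride-pad .. y*stride-pad+kernel-1, likewise for columns.
for y, x in zip(randy, randx):
    top, left = y * stride - pad, x * stride - pad
    print((int(y), int(x)), '->', (top, top + kernel - 1, left, left + kernel - 1))
```

So if the same `(randx, randy)` indices were used to slice both the input and the output, the patches and points would be paired automatically.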
```python
for name in names:  # I think names[0] is the input layer name and names[1] is the output layer name
    # pad = pads[name]
    shape = shapes[name]
    feat = self.blobs_data(name)
    if 0: print(name, self.blobs_shape(name))
    if inner or len(self.blobs_shape(name)) == 2 or (shape[0] == 1 and shape[1] == 1):
        feats_dict[name][fc_idx:(fc_idx + nPicsPerBatch)] = feat.reshape((self.num, -1))
        continue
    if batch >= dcfgs.nBatches and name in self.convs:
        continue
    # TODO!!! different patch for different image per batch
    if save:
        if not frozen_points or (batch, name, "randx") not in points_dict:
            # embed()
            randx = np.random.randint(0, shape[0], nPointsPerLayer)
            randy = np.random.randint(0, shape[1], nPointsPerLayer)
            # When the loop comes around again, randx should be the same as on the first pass, right?
            if dcfgs.dic.option == cfgs.pruning_options.resnet:
                branchrandxy = None
                branch1name = '_branch1'
                branch2cname = '_branch2c'
```
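Regarding my question about the loop: my reading of the `frozen_points` / `points_dict` check above is that the points are sampled only once per `(batch, name)` and then cached, so a later pass over the same batch reuses the identical `randx`. A minimal sketch of that caching idea (the function `get_points` and its arguments are my own invention, not the repo's API):

```python
import numpy as np

# Hypothetical sketch: sample randx/randy once per (batch, layer) and cache
# them, so a second pass over the same batch reuses identical points
# instead of drawing new ones.
points_dict = {}

def get_points(batch, name, shape, n, rng):
    key = (batch, name)
    if key not in points_dict:
        points_dict[key] = (rng.integers(0, shape[0], n),
                            rng.integers(0, shape[1], n))
    return points_dict[key]

rng = np.random.default_rng(0)
first = get_points(0, 'conv1', (8, 8), 4, rng)
second = get_points(0, 'conv1', (8, 8), 4, rng)
assert first is second  # cached, so the same points are reused
```

If that is right, then on the second iteration `randx` would indeed be the same as on the first, which is what I expected.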