FlorentF9 opened this issue 5 years ago
I see in the code that the quantization error is defined using an L1 norm (mean absolute error), not a Euclidean distance:
```python
def calculate_quantization_error(self):
    neuron_values = self.codebook.matrix[self.find_bmu(self._data)[0].astype(int)]
    quantization_error = np.mean(np.abs(neuron_values - self._data))
    return quantization_error
```
Traditionally, the quantization error is defined as the mean Euclidean distance between the samples and their BMUs (see the literature, or other SOM libraries such as the Matlab SOM-Toolbox). Shouldn't it be:
```python
def calculate_quantization_error(self):
    neuron_values = self.codebook.matrix[self.find_bmu(self._data)[0].astype(int)]
    quantization_error = np.mean(np.sqrt(np.sum(np.square(neuron_values - self._data), axis=1)))
    return quantization_error
```
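To make the difference concrete, here is a minimal standalone sketch (with hypothetical toy data, not tied to this library's API) computing both definitions on the same sample/BMU pairs. The L1 version averages over every matrix element, while the Euclidean version takes a per-sample L2 norm first and then averages over samples, so the two can differ substantially:

```python
import numpy as np

# Hypothetical toy data: 4 samples, 2 features, and the BMU codebook
# vector matched to each sample (values chosen for illustration only).
data = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [0.0, 2.0]])
bmu_values = np.array([[0.0, 1.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])

# Current definition: mean absolute error over all matrix elements.
qe_l1 = np.mean(np.abs(bmu_values - data))

# Traditional definition: mean Euclidean distance between each
# sample and its BMU (per-sample L2 norm, then mean over samples).
qe_l2 = np.mean(np.linalg.norm(bmu_values - data, axis=1))

print(qe_l1)  # 0.375
print(qe_l2)  # 0.75
```

Here every sample sits at distance 0 or 1 from its BMU, so the Euclidean definition gives the intuitive average distance of 0.75, while the element-wise L1 average halves it (the per-sample sum is spread over both features).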