In the inference stage, it seems that the mean and standard deviation used for standardization are taken from the model rather than computed from the input_tensor itself, so the standardized result is not a distribution with mean 0 and standard deviation 1. This seems inconsistent with the description in the first paragraph of the appendix of the paper. Did I miss something?
/model/BaseModel.py line 37
# standardize input
def standardizeInput(self, input_tensor):
    return ((input_tensor - self.input_means.type_as(input_tensor)) / self.input_scales.type_as(input_tensor))
# standardize output
def standardizeOutput(self, output_tensor):
    return ((output_tensor - self.output_means.type_as(output_tensor)) / self.output_scales.type_as(output_tensor))
# Add scales and means back to the input
def destandardizeInput(self, input_tensor):
    return (input_tensor * self.input_scales.type_as(input_tensor) + self.input_means.type_as(input_tensor))
# Add scales and means back to the output
def destandardizeOutput(self, predictions):
    return (predictions * self.output_scales.type_as(predictions) + self.output_means.type_as(predictions))
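To illustrate what I mean, here is a minimal self-contained sketch (not from the repo; it assumes input_means and input_scales are precomputed from the training set and stored on the model, as standardizeInput above suggests). When those fixed statistics are applied to a new inference batch, the result is only approximately zero-mean / unit-std rather than exactly:

import torch

# Stand-in for training data and the statistics stored on the model
train_data = torch.randn(1000, 3) * 5.0 + 2.0
input_means = train_data.mean(dim=0)   # hypothetical self.input_means
input_scales = train_data.std(dim=0)   # hypothetical self.input_scales

# A new inference-time batch drawn from a similar distribution
new_input = torch.randn(10, 3) * 5.0 + 2.0

# Standardize with the stored statistics, as standardizeInput does
standardized = (new_input - input_means) / input_scales

# Because the statistics are not recomputed from new_input, the
# standardized batch will not have exactly mean 0 and std 1
print(standardized.mean(dim=0), standardized.std(dim=0))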