Open TDL77 opened 1 year ago
Hi,
RevIN is a flexible, end-to-end trainable layer that can be applied to any arbitrarily chosen layers, effectively suppressing non-stationary information (mean and variance of the instance) in one layer and restoring it in another layer at a virtually symmetric position, e.g., input and output layers.
In your case, I think RevIN is not applied at a "symmetric position", because the input `inputs` is normalized while the hidden-space vector `x` is denormalized.
Also, you should check whether the distribution shift problem occurs in your dataset before adopting RevIN.
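For readers following along, the "normalize in one layer, restore at a symmetric position" idea can be sketched as a small PyTorch module. This is a minimal illustration in the spirit of the authors' reference implementation; the `mode` argument, the time-dimension statistics, and the affine parameters are assumptions taken from the paper's description, not a drop-in copy of the official code:

```python
import torch
import torch.nn as nn

class RevIN(nn.Module):
    """Minimal sketch of Reversible Instance Normalization.

    "norm" removes per-instance mean/variance (e.g. at the input layer);
    "denorm" restores the same statistics at the symmetric position
    (e.g. at the output layer).
    """
    def __init__(self, num_features, eps=1e-5, affine=True):
        super().__init__()
        self.eps = eps
        self.affine = affine
        if affine:
            # Learnable affine parameters, one per feature/variable.
            self.weight = nn.Parameter(torch.ones(num_features))
            self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x, mode):
        # x is assumed to be (batch, sequence_length, num_features).
        if mode == "norm":
            # Store the instance statistics so they can be restored later.
            self.mean = x.mean(dim=1, keepdim=True).detach()
            self.stdev = torch.sqrt(
                x.var(dim=1, keepdim=True, unbiased=False) + self.eps
            ).detach()
            x = (x - self.mean) / self.stdev
            if self.affine:
                x = x * self.weight + self.bias
        elif mode == "denorm":
            # Invert the affine transform, then restore mean/variance.
            if self.affine:
                x = (x - self.bias) / (self.weight + self.eps)
            x = x * self.stdev + self.mean
        else:
            raise ValueError(f"unknown mode: {mode}")
        return x
```

The key point is that `denorm` must undo exactly what `norm` did, which is only meaningful when both are applied to tensors at symmetric positions (raw input and final output), not to an intermediate hidden representation.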
Thank you very much for your reply. May I ask you for more help?
Please tell me, did I get it right here? For some reason, the results even got worse.
```python
import torch.nn as nn
import torch.optim as optim

class MultipleRegression(nn.Module):
    def __init__(self, num_features):
        super(MultipleRegression, self).__init__()
```
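To connect this back to the symmetric-position point above: for a model like this, RevIN-style statistics should be removed from the raw input and restored on the final prediction, not on a hidden vector. A hedged sketch of that placement follows; the layer sizes, the `revin` argument, and its `"norm"`/`"denorm"` calling convention are illustrative assumptions, not your actual model:

```python
import torch
import torch.nn as nn

class MultipleRegression(nn.Module):
    # Hypothetical architecture, for illustration only.
    def __init__(self, num_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64),
            nn.ReLU(),
            nn.Linear(64, num_features),
        )

    def forward(self, x, revin=None):
        # Optional RevIN-style layer with "norm"/"denorm" modes.
        if revin is not None:
            x = revin(x, "norm")        # normalize the raw input
        out = self.net(x)
        if revin is not None:
            out = revin(out, "denorm")  # restore statistics on the output,
                                        # the position symmetric to the input
        return out
```

If normalization is instead undone on an intermediate activation, the restored statistics no longer correspond to anything the network's output represents, which matches the explanation above for why results can get worse.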