Open enze5088 opened 5 years ago
I encountered the same problem. After some digging, I found that the source of the issue is the _batchify method of the Data_utility class in utils.py: the method takes horizon as an argument but never uses it, so changing it has no effect on the input/output shapes.
The horizon is the h in y(t+h+1). _batchify uses self.h, which is set to the horizon, so the output does depend on h, I think.
Agreed with the comment above that _batchify never uses its horizon argument. See line 70 of utils.py:
def _batchify(self, idx_set, horizon):
    n = len(idx_set);
    X = torch.zeros((n,self.P,self.m));
    Y = torch.zeros((n,self.m));
    for i in range(n):
        end = idx_set[i] - self.h + 1;
        start = end - self.P;
        X[i,:,:] = torch.from_numpy(self.dat[start:end, :]);
        Y[i,:] = torch.from_numpy(self.dat[idx_set[i], :]);  # <- This line.
It seems the code always performs one-step forecasting no matter how you change the horizon.
Am I right?
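(For reference, a quick sketch of the index arithmetic in the snippet above, using hypothetical values P = 3, h = 2, and target index t = 10: the input window ends h steps before the target index, but the target itself is a single time step.)

P, h, t = 3, 2, 10       # hypothetical window length, horizon, target index
end = t - h + 1          # 9  -> input window is dat[6:9], i.e. steps 6, 7, 8
start = end - P          # 6
# X[i] = dat[start:end]  -> last input step is t - h = 8
# Y[i] = dat[t]          -> a single step, h steps after the last input
print(start, end, t)     # 6 9 10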
You are right, this code only works for one-step forecasting. The one-step method seems to give better results, but it is not useful in practical applications. I think changing the shape of Y[i, :] to Y[i, output_step, :] would work. :)
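As a rough sketch of that suggestion (hypothetical code, not from the repository, assuming the same torch import and the self.P, self.m, self.dat attributes used in utils.py), a multi-step variant of _batchify could build Y with shape (n, horizon, m):

def _batchify_multistep(self, idx_set, horizon):
    # Hypothetical variant: the target is the `horizon` steps ending at idx_set[i].
    n = len(idx_set)
    X = torch.zeros((n, self.P, self.m))
    Y = torch.zeros((n, horizon, self.m))
    for i in range(n):
        end = idx_set[i] - horizon + 1      # input window ends just before the target span
        start = end - self.P
        X[i, :, :] = torch.from_numpy(self.dat[start:end, :])
        Y[i, :, :] = torch.from_numpy(self.dat[end:idx_set[i] + 1, :])
    return [X, Y]

The training loss would then have to be computed against a (batch_size, horizon, m) target instead of (batch_size, m).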
If I set horizon = {3, 6, 12, 24}, should I change outputTensor and TrainYTenosr to (batch_size, horizon, feature_dimension)? But no matter how I change the value of horizon, the output is [batch_size, feature_dimension]. Should I adjust the output layer?
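(In case it helps: a (batch_size, feature_dimension) output suggests the final layer still predicts m values per sample, so the output layer would indeed need adjusting. Below is a minimal, hypothetical sketch, with illustrative names that are not the repository's model, of a head that emits (batch_size, horizon, m).)

import torch
import torch.nn as nn

class MultiStepHead(nn.Module):
    """Hypothetical output head: predicts horizon * m values and reshapes them."""
    def __init__(self, hidden_size, horizon, m):
        super().__init__()
        self.horizon = horizon
        self.m = m
        self.linear = nn.Linear(hidden_size, horizon * m)

    def forward(self, h):                           # h: (batch_size, hidden_size)
        out = self.linear(h)                        # (batch_size, horizon * m)
        return out.view(-1, self.horizon, self.m)   # (batch_size, horizon, m)

head = MultiStepHead(hidden_size=100, horizon=12, m=8)
print(head(torch.randn(16, 100)).shape)             # torch.Size([16, 12, 8])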