nok-halfspace / Transformer-Time-Series-Forecasting


Problem in training loop #5

Closed juanantoniobellido closed 2 years ago

juanantoniobellido commented 3 years ago

I have found something that seems odd to me. The code below can be found in `train_teacher_forcing.py` and `train_with_sampling.py`:

    for index_in, index_tar, _input, target, sensor_number in dataloader:

        # Shape of _input : [batch, input_length, feature]
        # Desired input for model: [input_length, batch, feature]

        optimizer.zero_grad()
        src = _input.permute(1,0,2).double().to(device)[:-1,:,:] # torch.Size([47, 1, 7])
        target = _input.permute(1,0,2).double().to(device)[1:,:,:] # src shifted by 1.

As you can see, the `target` variable is derived from `_input`, while the `target` returned by the dataloader is overwritten by this assignment and never used again. Does that make sense?

Thank you very much in advance

nok-halfspace commented 2 years ago

Thanks for your question.

In the dataloader, I return the input as the slice `start_index : start_index + training_length` and the target as the slice `start_index + training_length : start_index + training_length + forecast_window`.
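As a rough sketch of that windowing (variable names like `series` and `start_index` are illustrative stand-ins, not the repository's exact code):

```python
# Illustrative sketch of the dataloader's windowing; names are assumptions,
# not the repository's exact code.
series = list(range(100))          # a univariate toy series
training_length = 48
forecast_window = 24
start_index = 10

# Encoder input window
_input = series[start_index : start_index + training_length]
# Forecast target window: the next forecast_window points after the input
target = series[start_index + training_length :
                start_index + training_length + forecast_window]

print(len(_input), len(target))     # 48 24
print(_input[-1] + 1 == target[0])  # True: target starts right after the input
```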

During teacher forcing I am only predicting one upcoming value at a time, so the target is just `src` shifted by one step.
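In other words (a minimal sketch using a plain Python list as a stand-in for one permuted input window; the real code slices the tensor along its time dimension):

```python
# Minimal sketch of the one-step teacher-forcing pair.
window = [10, 11, 12, 13, 14]   # one input window of length 5

src    = window[:-1]            # all but the last step   -> [10, 11, 12, 13]
target = window[1:]             # src shifted by one step -> [11, 12, 13, 14]

# At each position t the model sees src[t] and is trained to predict
# target[t], i.e. the very next value in the window.
```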