locuslab / TCN

Sequence modeling benchmarks and temporal convolutional networks
https://github.com/locuslab/TCN
MIT License

Cutting off effective history when evaluating char_cnn model. #33

Closed · mok33 closed 5 years ago

mok33 commented 5 years ago

I don't understand why, at test time (or when evaluating the model on a validation set), we don't compute the loss on the entire sequence, instead of only on the part of the sequence that is guaranteed sufficient history. The model is not evaluated on the whole dataset but only on a sub-part; are the results reliable, or even comparable to other models (LSTM, etc.) that don't use this method?

jerrybai1995 commented 5 years ago

If you are referring to the language modeling tasks, then no, it isn't a problem, because we also step through the data in increments of validseqlen (see https://github.com/locuslab/TCN/blob/master/TCN/char_cnn/char_cnn_test.py#L92), which means we do evaluate the entire dataset. We are basically using a "shifted window" scheme:

```
Iteration 1: [---{------ L ---------}]
Iteration 2: .................................[---{------ L ---------}]
Iteration 3: ..................................................................[---{------ L ---------}]
```

where "{...}" contains the validseqlen elements that absorb enough history information and are used to compute the loss, and "[...]" contains the sequence fed into TCN. L is the sequence length. Not sure if the "illustration" above helps; let me know if you want some further clarifications :-)

mok33 commented 5 years ago

Hi, thanks for the quick answer and for the illustration, I understand clearly now :D So, as you showed in the example above, only the very first (seqlen - validseqlen) elements of the dataset are not evaluated?

jerrybai1995 commented 5 years ago

That is correct. You could also evaluate them (i.e., evaluate the entire first sequence, and use validseqlen only for the rest of the sequences), but it probably wouldn't affect the performance (perplexity or bpc), because they are only a very small portion of the dataset.
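
If you do want that adjustment, one minimal way to express it in the sketch above (again with the same placeholder names) is to widen the scored region for the first window only:

```python
# Inside the loop of the sketch above: score the whole first window,
# then only the last validseqlen positions of every later window.
start = 0 if i == 0 else eff_history
logits = output[:, start:].reshape(-1, n_chars)
labels = target[:, start:].reshape(-1)
```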

mok33 commented 5 years ago

Great, thank you very much for the clarification, I will make that adjustment!