Closed: pleomax0730 closed this 4 months ago.

Hi @theeluwin, could you provide the epoch and sequence length hyperparameters needed to reproduce your results?

Also, what does vocab_size mean in the code? Is it the maximum item token in the train dataset?

Thanks for the great paper!

---

Thanks for the comment.

Yes, vocab_size means the number of item tokens.

The sequence length doesn't matter much as long as it's large enough (say 10, 20, or 50), but for MovieLens, use 100.

The number of epochs should simply be "enough", judged from the TensorBoard plot, but I mostly start with 200.
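For readers wondering how vocab_size relates to the item set, here is a minimal sketch. It assumes items are re-indexed to contiguous integer ids starting at 1, with id 0 reserved for padding and one extra id for a mask token — a common BERT4Rec-style convention, not necessarily this repository's exact code:

```python
def build_vocab_size(train_sequences):
    """Return a model vocab size given per-user item-id sequences.

    Assumption: item ids are contiguous integers starting at 1,
    so the number of item tokens equals the number of unique ids.
    """
    unique_items = {item for seq in train_sequences for item in seq}
    num_items = len(unique_items)  # number of item tokens
    num_special = 2                # 0 = padding, num_items + 1 = mask
    return num_items + num_special

# Toy example: three users, four distinct items.
train_sequences = [[1, 2, 3], [2, 4], [1, 4, 3]]
print(build_vocab_size(train_sequences))  # 4 items + pad + mask = 6
```

Under this convention, vocab_size is the number of item tokens plus the special tokens, which is why it can coincide with the maximum item id in the train set when ids are contiguous.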