sharpsy opened this issue 6 years ago
I just found https://github.com/huggingface/pytorch-openai-transformer-lm/issues/12 and https://github.com/openai/finetune-transformer-lm/issues/9
Not sure how I missed the first issue - I did look through the open issues - but it shows that comments in this code would help with understanding.
Hi @sharpsy,
I also think that this comment would help people understand this code. Do you want to add it yourself or would you like me to create the pull request?
Ok, I'll create the pull request.
What I don't understand is why we need to start from `n_vocab + n_special` and not, e.g., from 0. Any explanations?
Tokens from 0 to `n_vocab` are tokens from the vocabulary (the data), tokens from `n_vocab` to `n_vocab + n_special` are the special tokens `_start_`, `_delimiter_` and `_classify_`, and everything above that, up to `n_vocab + n_special + n_ctx`, encodes the position.
The indices from 0 to `n_vocab + n_special` are already taken in the embedding matrix by the vocabulary words (`n_vocab` of them) and the special tokens `_start_`, `_delimiter_` and `_classify_`.
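To make the layout concrete, here is a minimal sketch of how the single embedding matrix is partitioned. The sizes below are only illustrative, not taken from the repository:

```python
import numpy as np

# Illustrative sizes (assumptions, not the repository's actual values)
n_vocab = 40478    # BPE vocabulary tokens            -> ids [0, n_vocab)
n_special = 3      # _start_, _delimiter_, _classify_ -> ids [n_vocab, n_vocab + n_special)
n_ctx = 77         # positions in the context window  -> ids [n_vocab + n_special, n_vocab + n_special + n_ctx)

total_embeddings = n_vocab + n_special + n_ctx  # rows in the shared embedding matrix

# Ids of the special tokens
start_id = n_vocab
delimiter_id = n_vocab + 1
classify_id = n_vocab + 2

# Ids of the position embeddings, one per position in the context window
position_ids = np.arange(n_vocab + n_special, n_vocab + n_special + n_ctx)
print(position_ids[:5])  # all past the token/special range
```

Since all three kinds of ids index the same embedding matrix, the position ids have to start at `n_vocab + n_special` so they do not collide with the vocabulary and special-token rows.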
thx!
Thanks for the implementation, it is a big help! I'm trying to wrap my head around the code and I had some questions here and there. One of them was about the last step in the `transform_roc` function. It was not clear to me why they would augment the input sequence with another dimension containing increasing embedding ids, ids that are greater than any token from the source data vocabulary. Namely: `xmb[:, :, :, 1] = np.arange(n_vocab + n_special, n_vocab + n_special + n_ctx)`
The explanation is in the paper itself:
> We used learned position embeddings instead of the sinusoidal version proposed in the original work.
Those are the ids of the position embeddings, and they are added to the input data in the `forward()` method of the `TransformerModel`. It took me some time to find this, and we can help others who struggle with the same line. I would suggest adding comments to the code that will help with understanding.
In the `transform_roc` function, before the last step: `# Position information that is added to input embeddings in the TransformerModel`

And then in the `forward()` method of the `TransformerModel`, before the `h = e.sum(dim=2)` line: `# Add position embeddings to the input embeddings`
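Putting the two suggestions together, the comments would sit roughly like this (the surrounding lines are paraphrased, not copied from the repository):

```python
# In transform_roc, before the last assignment into xmb:
# Position information that is added to input embeddings in the TransformerModel
xmb[:, :, :, 1] = np.arange(n_vocab + n_special, n_vocab + n_special + n_ctx)

# In TransformerModel.forward(), before summing over the embedding channels:
# Add position embeddings to the input embeddings
h = e.sum(dim=2)
```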