huggingface / pytorch-openai-transformer-lm

🐥A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI

Clarifying last step of the 'transform_roc' function #26

Open sharpsy opened 6 years ago

sharpsy commented 6 years ago

Thanks for the implementation, it is a big help! I'm trying to wrap my head around the code and I had some questions here and there. One of them was about the last step in the transform_roc function: it was not clear to me why the input sequence is augmented with another dimension containing increasing embedding ids, ids that are greater than any token id in the source data vocabulary. Namely:

`xmb[:, :, :, 1] = np.arange(n_vocab + n_special, n_vocab + n_special + n_ctx)`

The explanation is in the paper itself: "We used learned position embeddings instead of the sinusoidal version proposed in the original work." So these are the ids of the learned position embeddings, and the corresponding embeddings are added to the input embeddings in the forward() method of the TransformerModel. It took me some time to find this, and we could help others who struggle with the same line.

I would suggest adding comments to the code to help with understanding.

In the transform_roc function, before the last step:

`# Position information that is added to the input embeddings in the TransformerModel`

And then in the forward() method of the TransformerModel, before the `h = e.sum(dim=2)` line:

`# Add position embeddings to the input embeddings`
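For anyone else landing here, below is a minimal, runnable sketch of the mechanism with the suggested comments in place. The shapes are simplified compared to the real transform_roc (the multiple-choice axis is dropped, so the position channel is `xmb[:, :, 1]` here) and the sizes are placeholders, so treat it as an illustration rather than the repo's code:

```python
import numpy as np
import torch
import torch.nn as nn

# Placeholder sizes; the real values come from the text encoder and the dataset.
n_vocab, n_special, n_ctx, n_embd = 100, 3, 7, 16

# One embedding matrix is shared by vocabulary tokens, special tokens and positions.
embed = nn.Embedding(n_vocab + n_special + n_ctx, n_embd)

# Data preparation side: channel 0 holds token ids, channel 1 holds position ids.
xmb = np.zeros((1, n_ctx, 2), dtype=np.int64)
xmb[:, :, 0] = np.random.randint(0, n_vocab, size=(1, n_ctx))  # stand-in token ids
# Position information that is added to the input embeddings in the TransformerModel
xmb[:, :, 1] = np.arange(n_vocab + n_special, n_vocab + n_special + n_ctx)

# Model side: look both channels up in the shared embedding matrix,
# then collapse the token/position channel.
x = torch.from_numpy(xmb)
e = embed(x)      # (1, n_ctx, 2, n_embd)
# Add position embeddings to the input embeddings
h = e.sum(dim=2)  # (1, n_ctx, n_embd)
print(h.shape)    # torch.Size([1, 7, 16])
```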

sharpsy commented 6 years ago

I just found https://github.com/huggingface/pytorch-openai-transformer-lm/issues/12 and https://github.com/openai/finetune-transformer-lm/issues/9

Not sure how I missed the first issue, since I did look through the open issues - but it shows that comments in this code would help with understanding.

rodgzilla commented 6 years ago

Hi @sharpsy,

I also think that this comment would help people understand this code. Do you want to add it yourself or would you like me to create the pull request?

sharpsy commented 6 years ago

Ok, I'll create the pull request.

teucer commented 6 years ago

What I don't understand is why we need to start from n_vocab + n_special and not, e.g., from 0. Any explanations?

sharpsy commented 6 years ago

Ids from 0 to n_vocab are tokens from the vocabulary (the data), ids from n_vocab to n_vocab + n_special are the special tokens _start_, _delimiter_ and _classify_, and everything above that, up to n_vocab + n_special + n_ctx, is used to encode the position.
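A small sketch of that index layout, with made-up sizes (the real n_vocab and n_ctx come from the encoder and the dataset, so treat these numbers as placeholders):

```python
n_vocab, n_special, n_ctx = 100, 3, 7  # placeholder sizes

# [0, n_vocab)                                        -> vocabulary token ids
# [n_vocab, n_vocab + n_special)                      -> _start_, _delimiter_, _classify_
# [n_vocab + n_special, n_vocab + n_special + n_ctx)  -> position ids, one per context slot

first_special_id = n_vocab
first_position_id = n_vocab + n_special
n_embeddings = n_vocab + n_special + n_ctx  # rows in the shared embedding matrix

# Starting the position ids at 0 would reuse rows that already belong to
# vocabulary tokens, which is why they begin right after the special tokens.
print(first_position_id, n_embeddings)  # 103 110
```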

rodgzilla commented 6 years ago

The indices from 0 to n_vocab + n_special are already taken in the embedding matrix by the vocabulary words (n_vocab of them) and the special tokens _start_, _delimiter_ and _classify_, so the position ids have to come after them.

teucer commented 6 years ago

thx!