karpathy / makemore

An autoregressive character-level language model for making more things
MIT License

Question about MLP #9

Open · isentropic opened this issue 1 year ago

isentropic commented 1 year ago

Here you are padding the tensor with the special starting token, and it looks strange to me that you are doing it inside the embedding loop. Aren't you supposed to pass the special token through the embedding first, and then use that as the padding?

tok_emb = self.wte(idx)  # token embeddings of shape (b, t, n_embd)
idx = torch.roll(idx, 1, 1)
# idx[:, 0] = self.vocab_size  # current code: index of the special <BLANK> token
idx[:, 0] = self.wte(self.vocab_size)  # something like this instead?

embs.append(tok_emb)
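
To make what I mean concrete, here is a rough standalone sketch of the alternative I had in mind (the sizes and names are made up for illustration, not the repo's actual values): embed the indices once, then roll in embedding space and pad the first time step with the embedding of the <BLANK> token.

import torch
import torch.nn as nn

# toy sizes, just for illustration
vocab_size, n_embd, block_size = 27, 64, 3
b, t = 4, 8
wte = nn.Embedding(vocab_size + 1, n_embd)      # +1 row for the special <BLANK> token

idx = torch.randint(0, vocab_size, (b, t))      # (b, t) batch of token indices
blank_emb = wte(torch.tensor([vocab_size]))     # (1, n_embd) embedding of <BLANK>

embs = []
tok_emb = wte(idx)                              # (b, t, n_embd), embed once up front
for k in range(block_size):
    embs.append(tok_emb)
    tok_emb = torch.roll(tok_emb, 1, 1)         # shift one step along the time dim
    tok_emb[:, 0, :] = blank_emb                # pad position 0 with <BLANK>'s embedding

x = torch.cat(embs, -1)                         # (b, t, n_embd * block_size)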