karpathy / nn-zero-to-hero

Neural Networks: Zero to Hero

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward. #31

Open Bie401 opened 1 year ago

Bie401 commented 1 year ago

I'm getting the error below while running the backward pass:

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
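The error message itself describes the mechanism: the intermediate values saved during the forward pass are freed as soon as backward() runs, so a second backward() over the same graph has nothing left to use. A minimal sketch (not from this post) that triggers the same RuntimeError:

```python
import torch

W = torch.randn(3, 3, requires_grad=True)
x = torch.randn(2, 3)

loss = (x @ W).sum()
loss.backward()   # first call works; the graph's saved tensors are then freed
loss.backward()   # second call on the same graph -> this RuntimeError
```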

Code

Forward Pass

logits = xenc @ W
counts = logits.exp()
prob = counts / counts.sum(1, keepdim=True)
loss = -prob[torch.arange(5), ys].log().mean()
print(loss.item())

Backward Pass

W.grad = None
loss.backward()

Update the weights

W.data += -0.1 * W.grad
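In case it helps, here is a minimal sketch of the full training loop (names and the 27-character vocabulary follow the makemore bigram setup; the example xs/ys indices and the iteration count are assumptions). The point is that the forward pass is re-run on every iteration, so a fresh autograd graph exists before each backward() call and the RuntimeError above does not occur:

```python
import torch
import torch.nn.functional as F

# assumed example data: bigram indices for the name "emma" (stoi: '.'=0, 'a'=1, ..., 'z'=26)
xs = torch.tensor([0, 5, 13, 13, 1])
ys = torch.tensor([5, 13, 13, 1, 0])

W = torch.randn((27, 27), requires_grad=True)

for k in range(100):
    # forward pass: rebuilt every iteration, which recreates the autograd graph
    xenc = F.one_hot(xs, num_classes=27).float()
    logits = xenc @ W
    counts = logits.exp()
    prob = counts / counts.sum(1, keepdim=True)
    loss = -prob[torch.arange(5), ys].log().mean()

    # backward pass: consumes the graph built by this iteration's forward pass
    W.grad = None
    loss.backward()

    # update the weights
    W.data += -0.1 * W.grad
```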

Query:

Why are we performing the one-hot encoding of the input every time we iterate the forward pass?

JaredLevi18 commented 1 year ago

To answer your question: no, in fact we are not using one-hot encoding to begin with. Based on what I've read, using embeddings is better, which is what is done in the video, and it is done when building the vocabulary, where the char variable is: the stoi and itos variables (if I remember their names correctly) are basically doing the embedding part. After that we run the iteration, first the forward pass and then the backward pass.
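For what it's worth, a small sketch of how the two views relate (assuming the 27-character bigram setup; variable names are assumptions): multiplying a one-hot encoding of the input indices by W selects a row of W, which gives the same result as indexing W directly with the stoi indices, i.e. an embedding-style lookup.

```python
import torch
import torch.nn.functional as F

xs = torch.tensor([0, 5, 13, 13, 1])           # integer indices produced via stoi
W = torch.randn((27, 27))

xenc = F.one_hot(xs, num_classes=27).float()   # one-hot encoding of the inputs
out_matmul = xenc @ W                          # matrix-multiply route from the video
out_lookup = W[xs]                             # direct row lookup (embedding-style)

print(torch.allclose(out_matmul, out_lookup))  # True: both select the same rows of W
```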