Open waterboy96 opened 1 year ago
Out of curiosity do you have a minimal example maybe that shows the issue?
Hi @PiotrCzapla
Right before the random sampling section we define fit as:
```python
def fit():
    for epoch in range(epochs):
        for xb,yb in train_dl:
            pred = model(xb)
            loss = loss_func(pred, yb)
            loss.backward()
            opt.step()
            opt.zero_grad()
        report(loss, preds, yb)
```
We define `pred` on line 4 of the loop, but the last line calls `report` on `preds`. Most of the other fit functions assign the prediction to `preds` correctly, such as the first fit function in the same notebook:
```python
def fit():
    for epoch in range(epochs):
        for i in range(0, n, bs):
            s = slice(i, min(n,i+bs))
            xb,yb = x_train[s],y_train[s]
            preds = model(xb)
            loss = loss_func(preds, yb)
            loss.backward()
            with torch.no_grad():
                for p in model.parameters(): p -= p.grad * lr
                model.zero_grad()
        report(loss, preds, yb)
```
I see what you mean: it is in 04_minibatch_training, just before the "Random sampling" heading. :) I made a fix, https://github.com/fastai/course22p2/pull/19. With github.dev it is super easy nowadays to make such changes directly in the browser, even to notebooks! :)
Thank you.
I did not know that. Thanks, I will try it next time!
Hi,
I was trying to reimplement the course material for my use case, which is 1D, as homework, and I hit a bug running the fit function with the PyTorch dataloaders. I noticed that the fit function implemented right before the sampling chapter passes a variable called `preds` to the `report` function, while the predictions are stored in `pred` during the loop. This led to inconsistent dimensions; removing the `s` fixed it.
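The "inconsistent dimensions" symptom (rather than a `NameError`) is worth illustrating: in a notebook, a `preds` left over from an earlier cell can silently stand in for the missing variable. Below is a minimal sketch of that failure mode, using plain Python lists and a made-up `report`/`model` as stand-ins (these are not the course's actual helpers, just hypothetical placeholders):

```python
# Leftover from an earlier hypothetical cell that used a batch size of 64.
preds = [0.0] * 64

def report(loss, preds, yb):
    # Stand-in for a reporting helper: predictions and targets
    # should have the same length.
    return len(preds) == len(yb)

def model(xb):
    # Stand-in model: identity-ish transform, one output per input.
    return [x * 2 for x in xb]

xb = list(range(32))          # current batch of 32 items
yb = list(range(32))
pred = model(xb)              # typo: result stored in `pred`, not `preds`
print(report(0.0, preds, yb)) # False: stale 64-item preds vs 32-item batch
```

Because `preds` already exists in the notebook's global scope, the typo does not raise an error; `report` just receives stale predictions whose length no longer matches the current batch, which is exactly the dimension mismatch described above.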
Cheers