gitgithan closed this issue 3 years ago.
1-3: thanks for these.
```python
# Get conv outputs
interpretable_model.eval()
conv_outputs = []
with torch.inference_mode():
    for i, batch in enumerate(dataloader):
        # Forward pass w/ inputs
        inputs, targets = batch[:-1], batch[-1]
        z = interpretable_model(inputs)
        # Store conv outputs
        conv_outputs.extend(z)
conv_outputs = np.vstack(conv_outputs)
print(conv_outputs.shape)  # (len(filter_sizes), num_filters, max_seq_len)
```
Hi,
I have been part of this thread and have no idea how to stop these emails.
Any ideas?
Thanks!
On Tue, 19 Oct 2021 at 19:25, Goku Mohandas wrote:

> 1-3: thanks for these.
>
> - You usually want to do inference on CPU/TPU (optimized for the forward pass) so you don't waste a machine with a GPU (which is usually reserved for training with backprop).
> - Good catch, I've updated the code on the webpage (see the snippet earlier in this thread).
hey @JellenRoberts, I'm sorry but I'm not entirely sure either. I do agree that we don't have to open issues this frequently. @gitgithan, I sent you a LinkedIn request, so let's chat on there for all future minor issues and clarifications; more than a thousand people are watching this repo, so they are actively getting emails for every single conversation. We can share anything with large implications for the whole community here as we come across it.
3. "We'll apply convolution via filters `(filter_size, vocab_size, num_filters)`": should `embedding_dim` replace `vocab_size`?

In "first have to decice padding our inputs before convolution to result is outputs", "is" should be "in".
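For context, the "padding our inputs before convolution" sentence is about keeping the output sequence length equal to the input length ("same" padding). A minimal numpy sketch of the idea; the function name and shapes here are illustrative assumptions, not the course's actual code:

```python
import numpy as np

def conv1d_same(seq, kernel):
    # Zero-pad the input so a "valid" 1-D convolution
    # produces an output of the same length as the input.
    filter_size = len(kernel)
    left = (filter_size - 1) // 2
    right = filter_size - 1 - left
    padded = np.pad(seq, (left, right))  # zero-pad both ends
    # slide the kernel over the padded sequence
    return np.array([np.dot(padded[i:i + filter_size], kernel)
                     for i in range(len(seq))])

seq = np.arange(5.0)        # input of length 5
kernel = np.ones(3) / 3.0   # simple averaging filter, filter_size=3
out = conv1d_same(seq, kernel)
print(out.shape)  # (5,) -- same length as the input
```

Without the padding, a filter of size 3 over a length-5 input would yield only 3 outputs, which is why the inputs are padded first.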
`device = torch.device("cpu")` moves things back to CPU. `interpretable_trainer.predict_step(dataloader)` breaks with `AttributeError: 'list' object has no attribute 'dim'`. The precise step is `F.softmax(z)`, where for the interpretable_model, `z` is a list of 3 items, and it was trying to softmax a list instead of a tensor.
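The error makes sense because softmax expects a single tensor, while the interpretable model returns a list (one output per filter size), so softmax would have to be applied per element. A minimal numpy sketch of the failure mode and the element-wise workaround; the shapes are hypothetical, and in the actual code this would be `F.softmax` applied to each tensor in `z`:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# z stands in for the model output: a list of 3 items
# (one per filter size); shapes here are hypothetical
z = [np.random.rand(4, 8) for _ in range(3)]

# softmax(z) on the whole list is the analogue of the
# AttributeError above; apply it to each element instead
probs = [softmax(z_i, axis=1) for z_i in z]
print([p.shape for p in probs])  # [(4, 8), (4, 8), (4, 8)]
print(probs[0].sum(axis=1))      # each row sums to 1
```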