Closed shashankvasisht closed 2 years ago
The PAD token's index is 0 (the default in our `Tokenizer` class), unless it's configured otherwise. But you made me realize that we should pass the index in to `pad_sequences` instead of assuming it's 0. These kinds of mismatches lead to silent bugs! I'll push this change towards the end of this month.

If you look at the `__init__` function for `InterpretableCNN`, it has all the layers. The only difference is that we're returning an earlier artifact in the `forward` function.

In the `pad_sequences` function, we force PAD to be zeros.
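That zero-padding behavior for one-hot inputs can be sketched as follows. This is a minimal illustration only (the helper name `pad_one_hot_sequences` and the toy vocabulary are assumptions, not the course's exact code): padded positions are left as all-zero vectors rather than getting a 1 at index 0, so PAD contributes nothing downstream.

```python
import numpy as np

def pad_one_hot_sequences(sequences, vocab_size, max_seq_len=0):
    """Pad variable-length token-index sequences into a one-hot tensor.

    Padded positions stay as all-zero rows (no 1 at index 0), so the
    PAD positions contribute nothing to the convolution outputs.
    """
    max_seq_len = max(max_seq_len, max(len(s) for s in sequences))
    X = np.zeros((len(sequences), max_seq_len, vocab_size), dtype=np.float32)
    for i, seq in enumerate(sequences):
        for j, token in enumerate(seq):
            X[i, j, token] = 1.0  # one-hot for real tokens only
    return X

X = pad_one_hot_sequences([[3, 1], [2, 4, 1]], vocab_size=5)
print(X.shape)   # (2, 3, 5)
print(X[0, 2])   # the padded position is an all-zero row
```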
Hi, thank you for such excellent lessons!
I had 3 doubts about the lecture, can you please explain them:
1. When we pad the one-hot sequences to the max sequence length, why do we not put a 1 at the 0th index (so that it corresponds to the `<pad>` token)? Why is it currently all zeros?
2. When we load the weights into the `InterpretableCNN` model, why don't we get a weight-mismatch error? (We have dropped the FC-layer part, and we're also not using `strict=False`.)
3. My sns heatmap / `conv_output` has all values equal to 1. It does not resemble yours. Can you help me with this?
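On the weight-loading question: PyTorch's `load_state_dict` only raises a mismatch error when the two models' state-dict keys differ. A minimal sketch (hypothetical toy models, not the course's actual architecture) of why defining all layers in `__init__` but returning early in `forward` loads cleanly even with the default `strict=True`:

```python
import torch
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(8, 16, kernel_size=3)
        self.fc = nn.Linear(16, 4)

    def forward(self, x):
        z = self.conv(x)
        z = z.max(dim=-1).values  # global max pool over time
        return self.fc(z)

class InterpretableCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Identical layers, including fc, even though forward never uses it.
        self.conv = nn.Conv1d(8, 16, kernel_size=3)
        self.fc = nn.Linear(16, 4)

    def forward(self, x):
        return self.conv(x)  # return the earlier artifact (conv activations)

model = CNN()
interpretable = InterpretableCNN()
# Keys match exactly, so no error is raised despite strict=True by default.
interpretable.load_state_dict(model.state_dict())
```

Because both `__init__` methods register the same parameters, both state dicts contain the same keys (`conv.weight`, `conv.bias`, `fc.weight`, `fc.bias`); strict loading only compares keys and shapes, never the `forward` logic.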