d2l-ai / d2l-en

Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.
https://D2L.ai

d2l.Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16, num_layers=2) #2605

Open achaosss opened 3 weeks ago

achaosss commented 3 weeks ago

```
TypeError                                 Traceback (most recent call last)
Cell In[21], line 4
      1 encoder = d2l.Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16,
      2                              num_layers=2)
      3 encoder.eval()
----> 4 decoder = Seq2SeqAttentionDecoder(vocab_size=10, embed_size=8, num_hiddens=16,
      5                                   num_layers=2)
      6 decoder.eval()
      7 X = torch.zeros((4, 7), dtype=torch.long)  # (batch_size, num_steps)

Cell In[20], line 5
      2 def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
      3              dropout=0, **kwargs):
      4     super(Seq2SeqAttentionDecoder, self).__init__(**kwargs)
----> 5     self.attention = d2l.AdditiveAttention(
      6         num_hiddens, num_hiddens, num_hiddens, dropout)
      7     self.embedding = nn.Embedding(vocab_size, embed_size)
      8     self.rnn = nn.GRU(
      9         embed_size + num_hiddens, num_hiddens, num_layers,
     10         dropout=dropout)

TypeError: __init__() takes 3 positional arguments but 5 were given
```

achaosss commented 3 weeks ago

The code is from `d2l-zh\pytorch\chapter_attention-mechanisms\bahdanau-attention.ipynb`.

achaosss commented 3 weeks ago

Neither d2l==0.17.2 nor d2l==1.0+ works.
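The error message itself is the key clue: the installed `AdditiveAttention.__init__` accepts only 3 positional arguments (`self` plus two), while the notebook passes four (`key_size, query_size, num_hiddens, dropout` style from an older d2l API). This suggests an API change between d2l generations rather than a bug in the notebook. A minimal sketch of a version-tolerant call, using stub classes to stand in for the two presumed d2l signatures (the real signatures may differ; the class names and `make_attention` helper below are hypothetical):

```python
import inspect

# Stand-ins for the two presumed API generations (assumption: the newer
# d2l collapsed key/query/value sizes into a single num_hiddens argument).
class AdditiveAttentionOld:          # older d2l (0.17.x-era) style
    def __init__(self, key_size, query_size, num_hiddens, dropout):
        self.num_hiddens = num_hiddens

class AdditiveAttentionNew:          # newer d2l (1.0+-era) style
    def __init__(self, num_hiddens, dropout):
        self.num_hiddens = num_hiddens

def make_attention(cls, num_hiddens, dropout):
    """Call cls with however many positional args its __init__ accepts."""
    # Count __init__ parameters, excluding self.
    n_params = len(inspect.signature(cls.__init__).parameters) - 1
    if n_params >= 4:
        # Old-style signature: (key_size, query_size, num_hiddens, dropout).
        return cls(num_hiddens, num_hiddens, num_hiddens, dropout)
    # New-style signature: (num_hiddens, dropout).
    return cls(num_hiddens, dropout)

old = make_attention(AdditiveAttentionOld, 16, 0.1)
new = make_attention(AdditiveAttentionNew, 16, 0.1)
```

In the notebook itself, the equivalent fix would be to match the argument count to whichever d2l is installed: keep the four-argument call for the old API, or drop the first two arguments for the new one.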