-
Thanks for sharing, this is very interesting. I also want to create a .npy file. I followed your installation instructions step by step without any mistakes until the last step.
My error message…
-
Parameter error when using imitate_episodes.py to train the model.
```
TypeError: forward() got an unexpected keyword argument 'src_key_padding_mask'
TypeError: forward() got an unexpected keyword argument 'pos…
```
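These errors usually mean the caller (e.g. an installed PyTorch `nn.TransformerEncoder`, or DETR-style code that passes `pos`) hands keyword arguments to a custom layer whose `forward` does not declare them. A minimal sketch of a layer that accepts and uses both keywords; the class name and dimensions are illustrative, not the repo's actual code:

```python
import torch
from torch import nn

class EncoderLayerWithPos(nn.Module):
    """Sketch: a layer whose forward declares the kwargs the caller passes."""
    def __init__(self, d_model=32, nhead=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead)

    def forward(self, src, src_mask=None, src_key_padding_mask=None, pos=None):
        # DETR-style: add the positional embedding to queries/keys only,
        # and forward the padding mask on to MultiheadAttention.
        q = k = src if pos is None else src + pos
        out, _ = self.self_attn(q, k, value=src,
                                attn_mask=src_mask,
                                key_padding_mask=src_key_padding_mask)
        return out

layer = EncoderLayerWithPos()
x = torch.randn(5, 2, 32)                      # (seq, batch, d_model)
pad = torch.zeros(2, 5, dtype=torch.bool)      # nothing padded
y = layer(x, src_key_padding_mask=pad, pos=torch.zeros_like(x))
print(y.shape)                                 # torch.Size([5, 2, 32])
```

If the error comes from a version mismatch instead, aligning the installed torch version with the repo's requirements is usually the simpler fix.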
-
-
I added the following argument in finetune.sh, but training did not resume from the last saved checkpoint. How can I make finetuning continue from the checkpoint?
--resume_from_checkpoint True \
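A flag alone only helps if the training script actually loads the checkpoint before the loop restarts. A minimal sketch of what resuming usually requires; the file name `last.ckpt` and the dictionary key names here are assumptions, not the repo's actual layout:

```python
import os
import torch
from torch import nn

# Toy model/optimizer standing in for the real training setup.
model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

ckpt_path = "last.ckpt"  # hypothetical path; point this at your run's last checkpoint

# What the training loop should save periodically (key names are assumptions):
torch.save({"model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "epoch": 3}, ckpt_path)

# What "resume" actually requires before the loop restarts:
start_epoch = 0
if os.path.exists(ckpt_path):
    ckpt = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    start_epoch = ckpt["epoch"] + 1

print("resuming at epoch", start_epoch)  # resuming at epoch 4
```

Restoring the optimizer state alongside the weights matters; resuming with a freshly initialized optimizer often causes a loss spike.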
-
I used this code to print all the named parameters:
```
for name, module in model.named_parameters():
    print(name)
```
And this is the output:
```
transformer.level_embed
transformer.encoder.layers.0.…
```
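The same loop can also report each parameter's shape and the total count, which helps verify which submodules exist before freezing layers or loading weights. A sketch with a stand-in model (substitute your own):

```python
import torch
from torch import nn

# Stand-in model; replace with the model whose parameters you are inspecting.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=16, nhead=4), num_layers=2)

total = 0
for name, param in model.named_parameters():
    print(f"{name:55s} {tuple(param.shape)}")
    total += param.numel()
print("total trainable parameters:", total)
```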
-
When loading 512-inpainting-ema.ckpt at the start of training, I found that many weights in the pretrained checkpoint and the model were not loaded successfully. Is this normal?
```
Restored from ./checkpoints/pretrained/512-inpainting-ema.ckpt with 508 missing and 420 unexpected keys
Missing…
```
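Missing/unexpected keys are expected whenever the checkpoint and the current model definition differ, e.g. a pretrained checkpoint loaded into a modified architecture with `strict=False`: only the keys present in both are restored. A small sketch of how that report is produced (the toy modules here are illustrative, not the actual inpainting model):

```python
import torch
from torch import nn

class Old(nn.Module):
    """Stands in for the architecture the checkpoint was saved from."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(4, 4)
        self.old_head = nn.Linear(4, 2)   # exists only in the checkpoint

class New(nn.Module):
    """Stands in for the modified architecture being trained."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(4, 4)
        self.new_head = nn.Linear(4, 3)   # exists only in the new model

state = Old().state_dict()
result = New().load_state_dict(state, strict=False)
print("missing:", result.missing_keys)        # new_head.* is not in the checkpoint
print("unexpected:", result.unexpected_keys)  # old_head.* is not in the model
```

So the log line above is normal as long as the missing keys correspond to the parts of the model that were intentionally added or changed; those parts simply train from their fresh initialization.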
-
Hi Ankit,
great work on C5_W4_A1_Transformer_Subclass_v1! But I have a problem in the Encoder and Decoder parts. I took the same approach as you, but I get an error saying the values are wrong.
Wo…
-
### Deep Learning Simplified Repository (Proposing new issue)
:red_circle: **Project Title** : Automated Legal Document Summarizer
:red_circle: **Aim** : Create a model that can read and summarize…
-
```
RuntimeError: Error(s) in loading state_dict for LoRANetwork:
	Missing key(s) in state_dict: "lora_te_text_model_encoder_layers_0_self_attn_k_proj.alpha",
	"lora_te_text_model_encoder_layers_0_…
```
-
## In one sentence
A study that applies a Transformer to image-plus-language tasks such as VQA. On the image side, self-attention runs over object-region features using each region's position vector (learning relations between objects); on the language side, standard self-attention is used; finally, cross-attention (language-to-image and image-to-language attention) is computed, followed by self-attention, to produce the output. Achieves SOTA through pre-training.
![…
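The block described above (cross-modal attention between the two streams, followed by self-attention) can be sketched with `nn.MultiheadAttention`; the dimensions, layer structure, and names here are illustrative assumptions, not the paper's implementation:

```python
import torch
from torch import nn

class CoAttentionBlock(nn.Module):
    """Sketch: cross-attention (one modality queries the other), then self-attention."""
    def __init__(self, d_model=32, nhead=4):
        super().__init__()
        self.cross = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)

    def forward(self, lang, vis):
        # language-to-image: language tokens attend over object-region features
        x, _ = self.cross(query=lang, key=vis, value=vis)
        # then self-attention over the fused language stream
        y, _ = self.self_attn(x, x, x)
        return y

block = CoAttentionBlock()
lang = torch.randn(2, 7, 32)    # (batch, language tokens, dim)
vis = torch.randn(2, 36, 32)    # (batch, object regions, dim)
out = block(lang, vis)
print(out.shape)                # torch.Size([2, 7, 32])
```

A full model would run the symmetric image-to-language branch as well and stack several such blocks, but the single direction above shows the attention wiring.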