-
![QQ图片20240714011506](https://github.com/user-attachments/assets/19c84688-8322-4893-823b-d8daecfe847b)
(calm) (base) penghuan@ubuntu:~/code/SimSGT/regression$ sh script/pretrain_GEOM.sh
add args
…
-
from transformers import VisionEncoderDecoderModel
model = VisionEncoderDecoderModel.from_pretrained('./model/hand-write/')
The model directory contains pytorch_model.bin and the config file.
After switching to mindnlp.transformers, there is no VisionEncode…
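A minimal sketch for diagnosing the report above: probe which installed backends actually expose a `VisionEncoderDecoderModel` attribute before calling `from_pretrained`. The helper name and the candidate list are hypothetical, not part of either library's API, and the sketch makes no assumption about what is installed.

```python
import importlib

def find_vision_encoder_decoder_backend(
    candidates=("transformers", "mindnlp.transformers"),
):
    """Return the subset of candidate packages that import successfully
    and expose a VisionEncoderDecoderModel attribute."""
    available = []
    for name in candidates:
        try:
            module = importlib.import_module(name)
        except ImportError:
            continue  # package not installed in this environment
        if hasattr(module, "VisionEncoderDecoderModel"):
            available.append(name)
    return available

print(find_vision_encoder_decoder_backend())
```

If the list comes back empty for `mindnlp.transformers`, the class is genuinely missing from that backend (or named differently there) rather than a loading problem with the checkpoint directory.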
-
Hi,
I found a model size mismatch with the checkpoint.
size mismatch for encoder.block.0.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([768, 3072]) from checkpoint, the shap…
-
Hey there!
Thanks a lot for the amazing work and for making it public.
Unfortunately, when I tried to run the code on Colab, I got the following error:
------------------------------------------------------…
-
Will there be added support for encoder-decoder models, like T5 or BART? All of the currently supported models are decoder-only.
-
Traceback (most recent call last):
File "/home/lpl/muavic/demo/run_demo.py", line 220, in
AV_RESOURCES = load_av_models(args.av_models_path)
File "/home/lpl/muavic/demo/demo_utils.py", lin…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [ ] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
- Choose an encoder model to process the input into an embedding
- Choose a decoder model to process the embedding into the output
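The two steps above can be sketched with stand-in components (no real models; both functions are purely illustrative): the encoder maps the input to an intermediate embedding, and the decoder maps that embedding to the output.

```python
def encoder(text):
    # Stand-in "encoder": represent the input as a list of character codes.
    # A real encoder model would produce a dense embedding instead.
    return [ord(c) for c in text]

def decoder(embedding):
    # Stand-in "decoder": map the intermediate representation to output text.
    return "".join(chr(v) for v in embedding)

output = decoder(encoder("abc"))
print(output)  # → abc
```

The key design point is that the decoder only ever sees the embedding, so the two halves can be chosen and swapped independently as long as they agree on that interface.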
-
RuntimeError: Error(s) in loading state_dict for VLT5VRDCaption:
size mismatch for encoder.block.0.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([768, 3072]) from ch…
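A hedged sketch of how one might locate every mismatch like this before loading: compare parameter shapes between the checkpoint and the model key by key. The shapes below are plain tuples so the sketch runs without torch, and the example model shape `(768, 2048)` is hypothetical (only the checkpoint shape `(768, 3072)` appears in the error above); with real tensors, use `tuple(t.shape)` on each `state_dict` entry the same way.

```python
def find_shape_mismatches(model_shapes, ckpt_shapes):
    """Return {param_name: (model_shape, checkpoint_shape)} for every key
    present in both dicts whose shapes disagree."""
    mismatches = {}
    for name, ckpt_shape in ckpt_shapes.items():
        model_shape = model_shapes.get(name)
        if model_shape is not None and model_shape != ckpt_shape:
            mismatches[name] = (model_shape, ckpt_shape)
    return mismatches

# Illustrative shapes only; the model-side value is assumed, not from the issue.
model = {"encoder.block.0.layer.1.DenseReluDense.wo.weight": (768, 2048)}
ckpt = {"encoder.block.0.layer.1.DenseReluDense.wo.weight": (768, 3072)}
print(find_shape_mismatches(model, ckpt))
```

Listing all mismatches at once usually reveals the real cause (e.g. a config with a different `d_ff` or hidden size than the checkpoint was trained with), which is more informative than the first error `load_state_dict` raises.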
-
**Describe the bug**
I can successfully run my inference code on the default megamolbart.nemo, but as soon as I run any kind of fine-tuning on it, I get the error RuntimeError: Error(s) in loadin…