Glaciohound / Chimera-ST

A PyTorch implementation of the paper "Learning Shared Semantic Space for Speech-to-Text Translation" (Findings of ACL 2021)
MIT License

wav2vec 2.0 model download URL no longer works #3

Closed · hannlp closed this issue 2 years ago

hannlp commented 2 years ago

Hello, and thank you for open-sourcing the model and code! I ran into a problem while trying to reproduce the results: the URL that the download script uses for the wav2vec 2.0 model, http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera, appears to be dead. I instead downloaded the official model linked from https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md, namely https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_960h.pt, but it does not match; loading it fails with the following error:

RuntimeError: Error(s) in loading state_dict for Wav2Vec2Model:
        Missing key(s) in state_dict: "mask_emb", "feature_extractor.conv_layers.0.0.weight", "feature_extractor.conv_layers.0.2.weight", "feature_extractor.conv_layers.0.2.bias", "feature_extractor.conv_layers.1.0.weight", "feature_extractor.conv_layers.2.0.weight", "feature_extractor.conv_layers.3.0.weight", "feature_extractor.conv_layers.4.0.weight", "feature_extractor.conv_layers.5.0.weight", "project_q.weight", "project_q.bias", "encoder.pos_conv.0.bias", "encoder.pos_conv.0.weight_g", "encoder.pos_conv.0.weight_v", "encoder.layers.0.self_attn.k_proj.weight", "encoder.layers.0.self_attn.k_proj.bias", "encoder.layers.0.self_attn.v_proj.weight", "encoder.layers.0.self_attn.v_proj.bias", "encoder.layers.0.self_attn.q_proj.weight", "encoder.layers.0.self_attn.q_proj.bias", "encoder.layers.0.self_attn.out_proj.weight", "encoder.layers.0.self_attn.out_proj.bias", "encoder.layers.0.self_attn_layer_norm.weight", "encoder.layers.0.self_attn_layer_norm.bias", "encoder.layers.0.fc1.weight", "encoder.layers.0.fc1.bias", "encoder.layers.0.fc2.weight", "encoder.layers.0.fc2.bias", "encoder.layers.0.final_layer_norm.weight", "encoder.layers.0.final_layer_norm.bias", "encoder.layers.1.self_attn.k_proj.weight", "encoder.layers.1.self_attn.k_proj.bias", "encoder.layers.1.self_attn.v_proj.weight", "encoder.layers.1.self_attn.v_proj.bias", "encoder.layers.1.self_attn.q_proj.weight", "encoder.layers.1.self_attn.q_proj.bias", "encoder.layers.1.self_attn.out_proj.weight", "encoder.layers.1.self_attn.out_proj.bias", "encoder.layers.1.self_attn_layer_norm.weight", "encoder.layers.1.self_attn_layer_norm.bias", "encoder.layers.1.fc1.weight", "encoder.layers.1.fc1.bias", "encoder.layers.1.fc2.weight", "encoder.layers.1.fc2.bias", "encoder.layers.1.final_layer_norm.weight", "encoder.layers.1.final_layer_norm.bias", "encoder.layers.2.self_attn.k_proj.weight", "encoder.layers.2.self_attn.k_proj.bias", "encoder.layers.2.self_attn.v_proj.weight", "encoder.layers.2.self_attn.v_proj.bias", "encoder.layers.2.self_attn.q_proj.weight", "encoder.layers.2.self_attn.q_proj.bias", "encoder.layers.2.self_attn.out_proj.weight", "encoder.layers.2.self_attn.out_proj.bias", "encoder.layers.2.self_attn_layer_norm.weight", "encoder.layers.2.self_attn_layer_norm.bias", "encoder.layers.2.fc1.weight", "encoder.layers.2.fc1.bias", "encoder.layers.2.fc2.weight", "encoder.layers.2.fc2.bias", "encoder.layers.2.final_layer_norm.weight", "encoder.layers.2.final_layer_norm.bias", "encoder.layers.3.self_attn.k_proj.weight", "encoder.layers.3.self_attn.k_proj.bias", "encoder.layers.3.self_attn.v_proj.weight", "encoder.layers.3.self_attn.v_proj.bias", "encoder.layers.3.self_attn.q_proj.weight", "encoder.layers.3.self_attn.q_proj.bias", "encoder.layers.3.self_attn.out_proj.weight", "encoder.layers.3.self_attn.out_proj.bias", "encoder.layers.3.self_attn_layer_norm.weight", "encoder.layers.3.self_attn_layer_norm.bias", "encoder.layers.3.fc1.weight", "encoder.layers.3.fc1.bias", "encoder.layers.3.fc2.weight", "encoder.layers.3.fc2.bias", "encoder.layers.3.final_layer_norm.weight", "encoder.layers.3.final_layer_norm.bias", "encoder.layers.4.self_attn.k_proj.weight", "encoder.layers.4.self_attn.k_proj.bias", "encoder.layers.4.self_attn.v_proj.weight", "encoder.layers.4.self_attn.v_proj.bias", "encoder.layers.4.self_attn.q_proj.weight", "encoder.layers.4.self_attn.q_proj.bias", "encoder.layers.4.self_attn.out_proj.weight", "encoder.layers.4.self_attn.out_proj.bias", "encoder.layers.4.self_attn_layer_norm.weight", 
"encoder.layers.4.self_attn_layer_norm.bias", "encoder.layers.4.fc1.weight", "encoder.layers.4.fc1.bias", "encoder.layers.4.fc2.weight", "encoder.layers.4.fc2.bias", "encoder.layers.4.final_layer_norm.weight", "encoder.layers.4.final_layer_norm.bias", "encoder.layers.5.self_attn.k_proj.weight", "encoder.layers.5.self_attn.k_proj.bias", "encoder.layers.5.self_attn.v_proj.weight", "encoder.layers.5.self_attn.v_proj.bias", "encoder.layers.5.self_attn.q_proj.weight", "encoder.layers.5.self_attn.q_proj.bias", "encoder.layers.5.self_attn.out_proj.weight", "encoder.layers.5.self_attn.out_proj.bias", "encoder.layers.5.self_attn_layer_norm.weight", "encoder.layers.5.self_attn_layer_norm.bias", "encoder.layers.5.fc1.weight", "encoder.layers.5.fc1.bias", "encoder.layers.5.fc2.weight", "encoder.layers.5.fc2.bias", "encoder.layers.5.final_layer_norm.weight", "encoder.layers.5.final_layer_norm.bias", "encoder.layer_norm.weight", "encoder.layer_norm.bias", "layer_norm.weight", "layer_norm.bias", "final_proj.weight", "final_proj.bias". 
        Unexpected key(s) in state_dict: "w2v_encoder.proj.weight", "w2v_encoder.proj.bias", "w2v_encoder.w2v_model.feature_extractor.conv_layers.0.0.weight", "w2v_encoder.w2v_model.feature_extractor.conv_layers.0.2.weight", "w2v_encoder.w2v_model.feature_extractor.conv_layers.0.2.bias", "w2v_encoder.w2v_model.feature_extractor.conv_layers.1.0.weight", "w2v_encoder.w2v_model.feature_extractor.conv_layers.2.0.weight", "w2v_encoder.w2v_model.feature_extractor.conv_layers.3.0.weight", "w2v_encoder.w2v_model.feature_extractor.conv_layers.4.0.weight", "w2v_encoder.w2v_model.feature_extractor.conv_layers.5.0.weight", "w2v_encoder.w2v_model.feature_extractor.conv_layers.6.0.weight", "w2v_encoder.w2v_model.encoder.pos_conv.0.bias", "w2v_encoder.w2v_model.encoder.pos_conv.0.weight_g", "w2v_encoder.w2v_model.encoder.pos_conv.0.weight_v", "w2v_encoder.w2v_model.encoder.layers.0.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.0.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.0.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.0.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.0.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.0.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.0.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.0.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.0.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.0.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.0.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.0.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.0.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.0.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.0.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.0.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.1.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.1.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.1.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.1.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.1.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.1.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.1.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.1.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.1.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.1.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.1.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.1.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.1.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.1.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.1.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.1.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.2.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.2.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.2.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.2.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.2.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.2.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.2.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.2.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.2.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.2.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.2.fc1.weight", 
"w2v_encoder.w2v_model.encoder.layers.2.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.2.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.2.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.2.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.2.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.3.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.3.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.3.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.3.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.3.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.3.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.3.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.3.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.3.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.3.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.3.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.3.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.3.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.3.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.3.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.3.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.4.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.4.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.4.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.4.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.4.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.4.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.4.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.4.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.4.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.4.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.4.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.4.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.4.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.4.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.4.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.4.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.5.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.5.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.5.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.5.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.5.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.5.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.5.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.5.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.5.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.5.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.5.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.5.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.5.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.5.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.5.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.5.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.6.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.6.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.6.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.6.self_attn.v_proj.bias", 
"w2v_encoder.w2v_model.encoder.layers.6.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.6.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.6.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.6.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.6.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.6.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.6.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.6.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.6.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.6.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.6.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.6.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.7.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.7.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.7.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.7.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.7.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.7.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.7.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.7.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.7.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.7.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.7.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.7.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.7.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.7.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.7.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.7.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.8.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.8.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.8.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.8.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.8.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.8.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.8.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.8.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.8.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.8.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.8.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.8.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.8.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.8.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.8.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.8.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.9.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.9.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.9.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.9.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.9.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.9.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.9.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.9.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.9.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.9.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.9.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.9.fc1.bias", 
"w2v_encoder.w2v_model.encoder.layers.9.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.9.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.9.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.9.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.10.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.10.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.10.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.10.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.10.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.10.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.10.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.10.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.10.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.10.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.10.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.10.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.10.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.10.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.10.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.10.final_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.11.self_attn.k_proj.weight", "w2v_encoder.w2v_model.encoder.layers.11.self_attn.k_proj.bias", "w2v_encoder.w2v_model.encoder.layers.11.self_attn.v_proj.weight", "w2v_encoder.w2v_model.encoder.layers.11.self_attn.v_proj.bias", "w2v_encoder.w2v_model.encoder.layers.11.self_attn.q_proj.weight", "w2v_encoder.w2v_model.encoder.layers.11.self_attn.q_proj.bias", "w2v_encoder.w2v_model.encoder.layers.11.self_attn.out_proj.weight", "w2v_encoder.w2v_model.encoder.layers.11.self_attn.out_proj.bias", "w2v_encoder.w2v_model.encoder.layers.11.self_attn_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.11.self_attn_layer_norm.bias", "w2v_encoder.w2v_model.encoder.layers.11.fc1.weight", "w2v_encoder.w2v_model.encoder.layers.11.fc1.bias", "w2v_encoder.w2v_model.encoder.layers.11.fc2.weight", "w2v_encoder.w2v_model.encoder.layers.11.fc2.bias", "w2v_encoder.w2v_model.encoder.layers.11.final_layer_norm.weight", "w2v_encoder.w2v_model.encoder.layers.11.final_layer_norm.bias", "w2v_encoder.w2v_model.layer_norm.weight", "w2v_encoder.w2v_model.layer_norm.bias", "w2v_encoder.w2v_model.post_extract_proj.weight", "w2v_encoder.w2v_model.post_extract_proj.bias", "w2v_encoder.w2v_model.mask_emb", "w2v_encoder.w2v_model.encoder.layer_norm.weight", "w2v_encoder.w2v_model.encoder.layer_norm.bias".

Could you provide a new download URL for the wav2vec 2.0 model used in the paper? Many thanks!
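
(For reference: the "Unexpected key(s)" above are all prefixed with w2v_encoder.w2v_model., which is consistent with wav2vec_small_960h.pt being a fine-tuned CTC checkpoint rather than the pretrained Wav2Vec2Model encoder the code tries to load. A minimal sketch for checking this yourself, assuming the file names are the official fairseq downloads:

    import torch

    # Minimal sketch: inspect which parameter names a fairseq checkpoint
    # carries. fairseq checkpoints store the state_dict under "model".
    ckpt = torch.load("wav2vec_small_960h.pt", map_location="cpu")
    for name in sorted(ckpt["model"].keys())[:5]:
        print(name)
    # Names prefixed with "w2v_encoder.w2v_model." indicate a fine-tuned CTC
    # checkpoint; a pretrained encoder checkpoint (e.g. wav2vec_small.pt) has
    # bare names such as "feature_extractor.conv_layers.0.0.weight".

This is why a fine-tuned checkpoint cannot be loaded where the pretrained encoder is expected, independent of the dead URL.)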

dqqcasia commented 2 years ago

@hannlp Hi, in the download URL, just replace the prefix sf3-ttcdn-tos.pstatp.com with lf3-nlp-opensource.bytetos.com and it should work.
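
(A minimal sketch of applying this fix when downloading by hand; the trailing object path and file name below are placeholders for illustration, not confirmed names from the repo's scripts:

    import urllib.request

    # Minimal sketch: rewrite the dead CDN prefix, then download.
    # "wav2vec2.pt" is a placeholder file name, not the actual object name.
    old_url = ("http://sf3-ttcdn-tos.pstatp.com"
               "/obj/nlp-opensource/acl2021/chimera/wav2vec2.pt")
    fixed_url = old_url.replace("sf3-ttcdn-tos.pstatp.com",
                                "lf3-nlp-opensource.bytetos.com")
    urllib.request.urlretrieve(fixed_url, "wav2vec2.pt")

The same string substitution can be applied inside the repo's download scripts.)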

Glaciohound commented 2 years ago

@dqqcasia Thanks! Following @dqqcasia's suggestion, I have pushed a fix to main. If it does not solve the problem, please reopen this issue; in the meantime, you can use the version from before this commit (on the before_issue#3 branch).