-
Hi!
When will the pre-trained vision model and the code for data2vec be released?
-
## ❓ Questions and Help
### Before asking:
This should be stated explicitly in the data2vec v2 paper, instead of being explained roughly in a few phrases.
So there is not sufficient info in the *document …
-
### Links
- Paper : https://arxiv.org/abs/2202.03555
- Github : https://github.com/facebookresearch/fairseq/tree/main/examples/data2vec
### One-line summary
- In the vision, speech, and language domains, masked predicti…
-
**Describe the bug**
Model I am using: UniLM.
I use the following code to load the model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("micro…
```
-
I trained a data2vec-base model myself, and then used the parameters in s3prl/s3prl/downstream/voxceleb1/config.yaml to reproduce the ASV and SID tasks in the SUPERB benchmark, but the results are very dif…
-
**Question**
Following the project description, after running prepare_kaldi_feats.sh no file ending in .tsv was generated. How is this file supposed to be generated?
**Project description**
1. Use Kaldi to extract 40-dimensional MFCC features; see the script prepare_kaldi_feats.sh.
The script prepare_kaldi_feats.sh and the parameter file mfcc_hires.conf can be placed in any Kaldi egs directory (alongside cmd.…
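For what it's worth, here is a minimal sketch of how I would expect such a manifest to be built, assuming the .tsv follows fairseq's wav2vec-style manifest format (root directory on the first line, then one `relative_path<TAB>num_frames` line per audio file); the function name and the use of the stdlib `wave` module are my own choices, not from the project:

```python
# Sketch: write a fairseq-style .tsv manifest for a directory of .wav files.
# Assumed format: first line = root dir, then "<relative path>\t<num frames>".
import os
import wave

def write_manifest(root, out_path, ext=".wav"):
    with open(out_path, "w") as out:
        out.write(root + "\n")
        for dirpath, _, files in os.walk(root):
            for name in sorted(files):
                if not name.endswith(ext):
                    continue
                full = os.path.join(dirpath, name)
                # count frames with the stdlib wave reader
                with wave.open(full, "rb") as w:
                    frames = w.getnframes()
                rel = os.path.relpath(full, root)
                out.write(f"{rel}\t{frames}\n")
```

I am not sure this matches what prepare_kaldi_feats.sh is supposed to produce, so a pointer to the intended tool would still be appreciated.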
-
Hi, I am interested in reproducing the numbers you reported on NSynth. With the models from HuggingFace I do get close, but not quite to what you report (0.4 - 0.8 lower for the models I tried, which …
-
Hi there,
I am trying to train an audio-only data2vec 2.0 model on a custom brain-signal dataset. I modified max_sample_size and the conv layers' architecture, as the data is multi-channel with a…
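In case it helps to be concrete, this is roughly the kind of override I mean (the field names are my assumption based on fairseq's wav2vec 2.0 / data2vec audio config schema, and the values are placeholders, not the ones I actually used):

```yaml
# Assumed fairseq-style overrides; field names taken from the wav2vec 2.0 /
# data2vec audio configs, values are placeholders for my dataset.
task:
  max_sample_size: 320000   # longest crop, in samples, per training example

model:
  # (dim, kernel, stride) tuples for the convolutional feature extractor
  conv_feature_layers: "[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512, 2, 2)] * 2"
```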
-
Is there any documentation or `examples` that I can refer to train a transformer model from scratch using `fairseq2`? The `examples` folder in the repository seems empty.
-
Hi authors, thank you for your code. Could you give me the link to download the pretrained ViT backbone weights? I have looked through data2vec but I am not sure which one to download.