-
Since soft-vc showed quite good performance in the cross-lingual setting, could you provide some samples in that setting? e.g. French -> English, Japanese -> English
-
Hi, I tried to run the following:
```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer
evaluation = MTEB()
task_names = [t.metadata_dict["name"] for t in MTEB…
```
-
**Describe the bug**
When I run DeepSpeed inference for BLOOM, I get stuck with `Caught signal 7 (Bus error: nonexistent physical address)`.
**To Reproduce**
This is what I receive when I run …
-
Hello, I tried to run inference with the sample audio provided on your website (https://starganv2-vc.github.io/), but I cannot reproduce the converted audio as you did. I used the pretrained model you prov…
-
The checkpoints under cvc-whispers-three-emo-loss are not right!
-
Use M2M100, MarianMT, or any other translation model to convert the non-English portions of xp3 to English. While some or most of these already have their English counterpart, the translation back will …
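A minimal sketch of such a translation pass, assuming records are `{"text": ..., "lang": ...}` dicts (a hypothetical schema; the real xp3 layout may differ) and a pluggable `translate(text, src_lang)` callable so any backend (M2M100, MarianMT, …) can be swapped in:

```python
from typing import Callable, Dict, List

def translate_non_english(
    records: List[Dict[str, str]],
    translate: Callable[[str, str], str],
) -> List[Dict[str, str]]:
    """Translate the non-English records of an xp3-style dataset to English.

    `records` uses a hypothetical {"text", "lang"} schema. `translate` wraps
    whatever model is chosen; e.g. for M2M100 via transformers, set
    tokenizer.src_lang to the source language and generate with
    forced_bos_token_id=tokenizer.get_lang_id("en").
    """
    out = []
    for rec in records:
        if rec["lang"] == "en":
            out.append(rec)  # already English; keep as-is
        else:
            out.append({"text": translate(rec["text"], rec["lang"]), "lang": "en"})
    return out
```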
-
Hi,
I tried to write a script to evaluate Google's "universal sentence encoder - 4" embedding model on the STS22 dataset.
I started from the "run_array_openaiv2.py" script and modified it.
I c…
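For reference, the core of an STS-style evaluation can be sketched as below, with the embedding model abstracted behind an `embed` callable (for USE-4 that would be the model loaded from TensorFlow Hub at https://tfhub.dev/google/universal-sentence-encoder/4; keeping the callable generic is an assumption so the sketch stays backend-agnostic):

```python
import numpy as np

def spearman(a, b):
    """Spearman correlation = Pearson correlation of rank-transformed values
    (ties ignored for brevity)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

def evaluate_sts(embed, pairs, gold_scores):
    """Score an embedding model on STS-style sentence pairs.

    `embed` maps a list of strings to an (n, d) array; `pairs` is a list of
    (sentence1, sentence2); `gold_scores` are the human similarity ratings.
    Returns the Spearman correlation between cosine similarities and gold.
    """
    s1 = embed([p[0] for p in pairs])
    s2 = embed([p[1] for p in pairs])
    # Normalize rows so the dot product is the cosine similarity.
    s1 = s1 / np.linalg.norm(s1, axis=1, keepdims=True)
    s2 = s2 / np.linalg.norm(s2, axis=1, keepdims=True)
    cos = np.sum(s1 * s2, axis=1)
    return spearman(cos, np.asarray(gold_scores, dtype=float))
```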
-
Cross-lingual QA dataset for Bengali and Telugu
https://arxiv.org/pdf/2010.11856.pdf
https://nlp.cs.washington.edu/xorqa/
-
Is there a publication for this work?
-
Please add the displaCy module for visualisation.