-
Hi, you have done great work. Could you please share the embedding file of your pretrained model? Thanks.
-
With reference to the code in examples/training_multilingual/make_multilingual.py, I want to train a French model, but I encountered this error when loading:
word_embedding_model = models.Transformer(stude…
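For reference, here is a minimal sketch of how the student model is usually constructed in make_multilingual.py; since the line above is truncated, the xlm-roberta-base student name is that script's default and an assumption here:

```python
# Minimal sketch of the student-model construction in make_multilingual.py.
# "xlm-roberta-base" is the script's default student and an assumption here.
from sentence_transformers import SentenceTransformer, models

student_model_name = "xlm-roberta-base"
word_embedding_model = models.Transformer(student_model_name, max_seq_length=128)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
student_model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```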
-
### 🐛 Describe the bug
I use the following script to translate Llama-2-7b-hf to MLIR. It fails during the conversion to TorchScript.
```python
from transformers import AutoTokenizer, LlamaForCausal…
```
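Since the truncated snippet doesn't show the actual error, here is a hedged sketch of one common fix: Hugging Face models return output dataclasses that torch.jit.trace cannot handle, so wrapping the model to return only the logits tensor often gets the TorchScript step through. The model id and the (1, 128) input shape below are assumptions:

```python
# Minimal sketch: trace LlamaForCausalLM by forcing plain tensor outputs.
# "meta-llama/Llama-2-7b-hf" and the input shape are assumptions.
import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torchscript=True)
model.eval()

class LogitsOnly(torch.nn.Module):
    """Return only the logits tensor so torch.jit.trace sees tensors, not dataclasses."""
    def __init__(self, m):
        super().__init__()
        self.m = m

    def forward(self, input_ids):
        return self.m(input_ids, use_cache=False, return_dict=False)[0]

example = torch.randint(0, 32000, (1, 128))  # dummy token ids (Llama-2 vocab size)
with torch.no_grad():
    traced = torch.jit.trace(LogitsOnly(model), example)
```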
-
@thinhlpg
Thanks for this meaningful project.
Could you share a script to download the data from Hugging Face and then build training data for the XTTS model?
I can access the URL of the dataset, but they are parquet files…
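In case it helps, parquet datasets on the Hub can be read directly with the datasets library; this is a minimal sketch with a placeholder repo id and no XTTS-specific formatting, since the exact dataset isn't shown above:

```python
# Minimal sketch: load a parquet-backed Hub dataset and inspect its columns.
# "some-user/some-tts-dataset" is a placeholder, not the actual dataset.
from datasets import load_dataset

ds = load_dataset("some-user/some-tts-dataset", split="train")
print(ds.column_names)  # check for audio / transcript / speaker columns
# XTTS fine-tuning generally needs (audio path, transcript, speaker) rows,
# so the next step is writing these columns into the trainer's metadata format.
```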
-
I downloaded the pretrained ResNet-18 model (R18 | Glint360K | 72.07) for face encoding (face embedding), and it is in ONNX format. I do not know how to preprocess the aligned face i…
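For what it's worth, the insightface ONNX recognition models generally expect a 112x112 RGB crop normalized to [-1, 1]; this is a sketch of that convention (the file names are placeholders), so please verify against the model zoo's own ArcFace wrapper:

```python
# Sketch of the usual ArcFace-style preprocessing for insightface ONNX models.
# File names are placeholders; the normalization follows the common convention
# of mean 127.5 / std 127.5, i.e. pixel values scaled to [-1, 1].
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("r18_glint360k.onnx", providers=["CPUExecutionProvider"])
img = cv2.imread("aligned_face.jpg")             # already-aligned face crop
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)       # OpenCV loads BGR; model wants RGB
img = cv2.resize(img, (112, 112))
blob = (img.astype(np.float32) - 127.5) / 127.5  # scale to [-1, 1]
blob = blob.transpose(2, 0, 1)[None]             # HWC -> NCHW with batch dim
emb = sess.run(None, {sess.get_inputs()[0].name: blob})[0]
emb = emb / np.linalg.norm(emb)                  # L2-normalize for cosine comparison
```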
-
## ❓ Questions and Help
#### What is your question?
Hello. I want to do something similar to the task in the paper "Generative Spoken Language Modeling from Raw Audio". I want to get CPC e…
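As a rough illustration only (the GSLM codebase has its own feature-extraction scripts, and load_cpc_encoder below is a hypothetical stand-in, not a real API), the extraction step conceptually looks like this:

```python
# Rough sketch only: load_cpc_encoder is hypothetical, not an actual GSLM API.
# It assumes a pretrained CPC encoder exposed as a torch.nn.Module that maps
# raw 16 kHz audio to frame-level embeddings.
import torch
import torchaudio

wav, sr = torchaudio.load("utterance.wav")
assert sr == 16000, "the GSLM pipeline assumes 16 kHz input audio"

encoder = load_cpc_encoder("cpc_checkpoint.pt")  # hypothetical loader
encoder.eval()
with torch.no_grad():
    feats = encoder(wav)  # expected shape: (1, n_frames, feature_dim)
```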
-
`prefix_dim = 640` in the pretrained model, but how do I map CLIP's 512-dimensional embedding to 640 dimensions before forwarding it to the net?
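One hedged possibility, if the checkpoint does not already contain a mapping layer, is a learned projection from 512 to 640; whether a single linear layer or a small MLP is the right choice depends on the repo:

```python
# Minimal sketch: project CLIP's 512-dim embedding to prefix_dim=640.
# Whether the pretrained checkpoint already includes such a mapper (often a
# small MLP in prefix-captioning codebases) depends on the specific repo.
import torch
import torch.nn as nn

proj = nn.Linear(512, 640)       # learned 512 -> 640 projection
clip_emb = torch.randn(1, 512)   # placeholder CLIP embedding
prefix = proj(clip_emb)          # shape (1, 640), ready for the prefix network
```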
-
First of all, thank you to the Shanghai AI Laboratory and its members for sharing the InternLM models, code framework, and technical experience!
My question: when training internlm2_20b_qlora_msagent_react_e3_gpu8, how can I add my own vocabulary?
For example, treating breed_name, area_name, etc. each as a single token.
Thanks!
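Outside of the xtuner config itself, the standard Hugging Face way to add whole-word tokens is tokenizer.add_tokens plus resizing the embedding matrix; this sketch uses the internlm2-20b Hub id as an assumption:

```python
# Minimal sketch with the standard Hugging Face API (not xtuner-specific config).
# The Hub id is an assumption; newly added embedding rows are randomly
# initialized and need fine-tuning to become useful.
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("internlm/internlm2-20b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-20b", trust_remote_code=True)

num_added = tok.add_tokens(["breed_name", "area_name"])  # each becomes one token
model.resize_token_embeddings(len(tok))                  # grow the embedding matrix
```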
-
Hi,
is there any pretrained multilingual model (for sentence embeddings) with a max sequence length > 128 (e.g. 256 or 512)?
`distiluse-base-multilingual-cased-v1`, `distiluse-base-multilingual-c…
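For what it's worth, sentence-transformers also lets you raise the truncation length on an existing model at load time, up to the underlying transformer's position-embedding limit; a minimal sketch, assuming paraphrase-multilingual-mpnet-base-v2:

```python
# Minimal sketch: raise the truncation length of a multilingual model.
# This only helps up to the backbone's position-embedding limit (512 for
# most such models); it does not retrain anything for longer inputs.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
print(model.max_seq_length)   # default truncation length
model.max_seq_length = 256    # inputs are now truncated at 256 tokens instead
```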
-
LlamaParallelizer today doesn't support a model modified by the PEFT/LoRA API. Please add support to the TP (tensor-parallel) parallelizer so it can handle Llama-2 PEFT/LoRA models for faster fine-tuning jobs.
This can start with…