facebookresearch / seamless_communication

Foundational Models for State-of-the-Art Speech and Text Translation

Why always Downloading the tokenizer of seamlessM4T_v2_large #409

Open Longleaves opened 7 months ago

Longleaves commented 7 months ago

I already set up CHECKPOINTS_PATH and the model cards, but the tokenizer of seamlessM4T_v2_large is always downloaded again when I run python app.py. Please help, thanks. [screenshots attached]

zrthxn commented 7 months ago

If I understand correctly, it looks like you're using snapshot_download. If you just load the model or tokenizer directly, the cached files will be used once downloaded.

from seamless_communication.models.unity import (
    load_unity_model,
    load_unity_text_tokenizer,
    load_unity_unit_tokenizer
)

model = load_unity_model(model_name_or_card)
tokenizer = load_unity_unit_tokenizer(model_name_or_card)
tokenizer = load_unity_text_tokenizer(model_name_or_card)

Here model_name_or_card = "seamlessM4T_v2_large"

amirmfarzane commented 4 months ago

> If I understand correctly, it looks like you're using snapshot_download. If you just load the model or tokenizer directly, the cached files will be used once downloaded.
>
> from seamless_communication.models.unity import (
>     load_unity_model,
>     load_unity_text_tokenizer,
>     load_unity_unit_tokenizer
> )
>
> model = load_unity_model(model_name_or_card)
> tokenizer = load_unity_unit_tokenizer(model_name_or_card)
> tokenizer = load_unity_text_tokenizer(model_name_or_card)
>
> Here model_name_or_card = "seamlessM4T_v2_large"

How do I load the checkpoints that I got from fine-tuning?

avidale commented 4 months ago

> How do I load the checkpoints that I got from fine-tuning?

You can start by loading the original model (e.g. seamlessM4T_v2_large) from its card, and then use the function load_checkpoint (src/seamless_communication/cli/m4t/evaluate/evaluate.py#L365) to update the model from your fine-tuned checkpoint.

Also, please take a look at the excellent note from Alisamar Husain about fine-tuning M4T models.
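Conceptually, what load_checkpoint does is merge the fine-tuned weights into the base model's state dict. Below is a minimal sketch of that idea using plain dicts as stand-ins for torch state dicts (the actual helper in evaluate.py may differ; all names and values here are hypothetical, and real code would use torch.load and model.load_state_dict):

```python
# Hedged sketch: merging a fine-tuned checkpoint into a base state dict.
# Plain dicts stand in for torch state_dicts; names are hypothetical.

def apply_checkpoint(base_state, ckpt_state):
    """Overwrite base weights with matching fine-tuned weights,
    keeping base values for keys the checkpoint does not contain."""
    merged = dict(base_state)
    merged.update(ckpt_state)
    return merged

base = {"encoder.w": 0.0, "final_proj.weight": 1.0}  # model.state_dict() stand-in
finetuned = {"encoder.w": 0.5}                       # torch.load(ckpt) stand-in

merged = apply_checkpoint(base, finetuned)
# keys absent from the checkpoint keep their base-model values
```

The point of merging rather than replacing is that a checkpoint which only saved a subset of the parameters still produces a complete model.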

amirmfarzane commented 4 months ago

> > How do I load the checkpoints that I got from fine-tuning?
>
> You can start by loading the original model (e.g. seamlessM4T_v2_large) from its card, and then use the function load_checkpoint (src/seamless_communication/cli/m4t/evaluate/evaluate.py#L365) to update the model from your fine-tuned checkpoint.
>
> Also, please take a look at the excellent note from Alisamar Husain about fine-tuning M4T models.

Thank you very much.

RRThivyan commented 1 month ago

Hi, I have fine-tuned the model using the notes from Alisamar, but the model cannot be loaded: it throws an error that some weights are missing (final_proj.weight). I modified seamlessm4t_v2_large.yaml to point to my checkpoint, but I still get this error. Do fine-tuned models have different weights from the original model?

amirmfarzane commented 1 month ago

> Hi, I have fine-tuned the model using the notes from Alisamar, but the model cannot be loaded: it throws an error that some weights are missing (final_proj.weight). I modified seamlessm4t_v2_large.yaml to point to my checkpoint, but I still get this error. Do fine-tuned models have different weights from the original model?

If you're having trouble loading checkpoints saved after fine-tuning, you can use the load_checkpoint function in the mini-evaluation section of this notebook.

RRThivyan commented 1 month ago

Hi, I followed the steps you mentioned, but as I said, it throws an error at final_proj.weight. This is my question: do the fine-tuned model weights differ from the original model's? If so, how can we use our fine-tuned model?

m4t_evaluate \
    --model_name seamlessM4T_large \
    --task ASR \
    --tgt_lang eng \
    --data_file /home/jupyter/myfiles/fleurs/test/test_manifest.json \
    --output_path eval \
    --n_samples 2000

[screenshot attached]
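One way to debug a missing final_proj.weight error is to compare the checkpoint's keys against the keys the model expects before calling load_state_dict. A hedged sketch of that diagnosis using plain key lists (in practice they would come from model.state_dict().keys() and torch.load(ckpt).keys(); the toy keys below are illustrative, not the real parameter names):

```python
# Hedged sketch: finding which weights are missing from a fine-tuned
# checkpoint. The key lists below are toy stand-ins, not real names.

def diff_state_dict_keys(model_keys, ckpt_keys):
    missing = sorted(set(model_keys) - set(ckpt_keys))     # expected but absent
    unexpected = sorted(set(ckpt_keys) - set(model_keys))  # present but unknown
    return missing, unexpected

model_keys = ["speech_encoder.w", "final_proj.weight"]       # model side
ckpt_keys = ["speech_encoder.w", "model.final_proj.weight"]  # checkpoint side

missing, unexpected = diff_state_dict_keys(model_keys, ckpt_keys)
# here the "missing" weight actually exists under a "model." prefix
```

Fine-tuning scripts often save parameters under an extra prefix (e.g. "model.") or with a slightly different layout, so a weight that looks missing may just be renamed; stripping such a prefix from the checkpoint keys before loading can resolve errors like this.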