Closed TroyZuroske closed 2 years ago
Hi,
You need to provide one target language code for each sentence to translate, so the `target_prefix` argument should have the same size as the number of sentences to translate.
Using `translate_iterable` is a good approach to translate large text files on multiple GPUs. Also consider increasing the default `batch_size` value if there is enough data and GPU memory.
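For example, a rough sketch along these lines (the `max_batch_size` value below is an illustrative assumption, not a tuned recommendation):

```python
# Sketch only: larger token batches generally improve GPU utilization,
# as long as they fit in GPU memory.
results = translator.translate_iterable(
    sentences_tokenized,          # one token list per sentence
    target_prefix=target_prefix,  # one [tgt_lang] prefix per sentence
    batch_type="tokens",
    max_batch_size=2048,          # illustrative value; tune to the available memory
)
```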
Thanks for the quick response,
"Using translate_iterable is a good approach to translate large text files on multiple GPUs. Also consider increasing the default batch_size value if there is enough data and GPU memory."
Do you know why only the first sentence is being translated in the code I posted above?
I provided the response in my post:
You need to provide one target language code for each sentence to translate, so the target_prefix argument should have the same size as the number of sentences to translate.
So you should build `target_prefix` to the same length as `sentences_tokenized`, for example:
```python
sentences_tokenized = []
target_prefix = []

for sentence in sentences:
    sentences_tokenized.append(tokenizer.convert_ids_to_tokens(tokenizer.encode(sentence)))
    target_prefix.append([tgt_lang])
```
Ah apologies, I misread your response. Let me give it a shot, thank you again.
@guillaumekln Thank you, that worked. However, I was expecting it to be faster. The odd part is that, excluding the model load time (which is expected), the bottleneck is the `tokenizer.decode(tokenizer.convert_tokens_to_ids(result.hypotheses[0][1:]))` call.

Running the script below on a g5.12xlarge produces these times for each part:

Model load time: 0:00:25.575123 seconds
Tokenize time: 0:00:00.005526 seconds
Translation time: 0:00:00.000003 seconds
Tokenizer decode time: 0:00:05.041000 seconds
```python
import spacy
import ctranslate2
import transformers
import time
from datetime import timedelta
src_lang="rus_Cyrl"
tgt_lang="eng_Latn"
start = time.time()
translator = ctranslate2.Translator("ct2_nllb_model", device="cuda", device_index=[0, 1, 2, 3])
end = time.time()
print(f"Model load time: {str(timedelta(seconds=end-start))} seconds")
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/nllb-200-3.3B", src_lang=src_lang)
nlp = spacy.load("ru_core_news_lg")
source = "Разделы сайта Каталог Статьи Обновления Поиск по сайту Вход для абонентов Контакты Телефон: +7 (499) 391-98-07 +7 (925) 507-63-54 E-mail: info@bnti.ru подробнее>> Версия для печатиКаталог / Информационная безопасность / Средства экстренного уничтожения информации / Средства экстренного уничтожения информации на иных носителях / INCAS Портативное устройство стирания фонограмм компакт-кассет и микрокассет. Не требует электропитания Стек-КС, Стек-КА Изделие предназначено для быстрого стирания информации на аудио и микрокассетах. Изделие обеспечивает полное стирание информации, записанной на магнитном носителе без его разрушения. После стирания носитель может быть, использован вновь. Технические характеристики: Стек- КССтек-КА Максимальная продолжительность перехода устройства в режим \"Готовность\"не более 10 с Длительность стирания информации на 1 носителе менее 1 мс Электропитание изделия220 В (10%), 50 Гц (5%) Допустимая продолжительность непрерывной работы изделия: в режиме \"Готовность\" не менее 24 ч в цикле \"Заряд\"/\"Стиран ие\" не менее 1 ч Габариты изделия 160x140x60 мм Время работы в автономном режиме нетне мен. 24 ч Время заряда встроенных аккумуляторов нет 24 ч Стек-ДС, Стек-ДА Изделие предназначено для быстрого стирания информации на компьютерных дискетах 3,5 дюйма и накопителях типа IOMEGA ZIP."
sentences = []
doc = nlp(source)
for sent in doc.sents:
    sentences.append(str(sent))
start = time.time()
sentences_tokenized = []
target_prefix = []
for sentence in sentences:
    sentences_tokenized.append(tokenizer.convert_ids_to_tokens(tokenizer.encode(sentence)))
    target_prefix.append([tgt_lang])
end = time.time()
print(f"Tokenize time: {str(timedelta(seconds=end-start))} seconds")
start = time.time()
results = translator.translate_iterable(sentences_tokenized, batch_type="tokens", target_prefix=target_prefix)
end = time.time()
print(f"Translation time: {str(timedelta(seconds=end-start))} seconds")
start = time.time()
trans_results = []
for result in results:
    # print(result)
    trans_results.append(tokenizer.decode(tokenizer.convert_tokens_to_ids(result.hypotheses[0][1:])))
end = time.time()
print(f"Tokenizer decode time: {str(timedelta(seconds=end-start))} seconds")
print(trans_results)
```
Does this make sense? Is it possible to speed up the converter and decoder?
The "Translation time" is incorrect. translate_iterable
returns a generator and you should iterate over it to actually run the translation (the method is designed for streaming translation). In your example the "Tokenizer decode time" is actually the translation time.
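For example, a minimal sketch of how the translation time could be measured (materializing the generator with `list()` so the work happens inside the timed block):

```python
start = time.time()
# Consuming the generator is what actually runs the translation.
results = list(translator.translate_iterable(
    sentences_tokenized, batch_type="tokens", target_prefix=target_prefix))
end = time.time()
print(f"Translation time: {str(timedelta(seconds=end-start))} seconds")
```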
Regarding the model load time, it can probably be optimized for multi-GPU. However, if you are using multiple GPUs it usually means you will be translating lots of data in which case the initial loading time does not matter.
One more tip: since you are running on A10G GPUs, consider enabling FP16 execution with the `compute_type` argument:
```python
translator = ctranslate2.Translator("ct2_nllb_model", device="cuda", device_index=[0, 1, 2, 3], compute_type="float16")
```
Let me close this issue as the original error "One input stream has less examples than the others" has been explained. Feel free to open a new issue if you find another problem.
Hi All,
I am not sure if this is a bug or more of a request for an example/guidance. I am trying to use NLLB for translation at scale and use multiple GPUs for inference, but I cannot figure out how to do it. I think I am close, but I am either getting an error or not getting all of the translations. I am trying to break the input text into sentences, which can then be treated as individual batches to the translator for batch translation across multiple GPUs in parallel. These are the two ways I am trying:
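(Approximate sketch of the first approach; the exact snippet is not reproduced here.)

```python
# Approximate sketch only, not the exact original snippet.
sentences_tokenized = []
for sent in nlp(source).sents:
    sentences_tokenized.append(tokenizer.convert_ids_to_tokens(tokenizer.encode(str(sent))))

# Note: only one target prefix is passed, while sentences_tokenized has many entries.
results = translator.translate_batch(
    sentences_tokenized, batch_type="tokens", target_prefix=[[tgt_lang]])
```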
You will see I am using spaCy to separate the text into sentences; I then tokenize each sentence and append it to an array. I then send the array of tokenized sentences to `translate_batch`, but I receive this error when calling `translate_batch`:
```
RuntimeError: One input stream has less examples than the others
```
The other way I have tried to do this is using the `translate_iterable` function. Same code as above, except the translation piece looks like:
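(Approximate sketch; the exact snippet is not reproduced here.)

```python
# Approximate sketch of the translate_iterable variant, not the exact snippet.
results = translator.translate_iterable(
    sentences_tokenized, batch_type="tokens", target_prefix=[[tgt_lang]])

for result in results:
    print(tokenizer.decode(tokenizer.convert_tokens_to_ids(result.hypotheses[0][1:])))
```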
This seemed to work, but it only translated the first sentence from `sentences_tokenized`.
Can anyone advise the correct way to batch sentences to the translator so they are translated in parallel across multiple GPUs?
Thank you!