Helsinki-NLP / Opus-MT

Open neural machine translation models and web services

Helsinki-NLP/opus-mt-tc-big-he-en: nonsensical translation and generally slow translation speed #88

Open mo-shahab opened 11 months ago

mo-shahab commented 11 months ago

The Hebrew-to-English model output is really nonsensical.

* and the translated text is (truncated here; the same few tokens repeat for hundreds of positions):

Gen Gen Terrorism Terrorism Terrorism Terrorism Terrorism Terrorism Cookie Cookie discussions Cookie discussions Cookie Cookie Cookie Cookie Cookie Cookie Cookie Cookie Cookie Cookie Cookie Cookie acknowledg acknowledg acknowledg acknowledg acknowledg acknowledg discussions discussions discussions discussions discussions discussions assembly assembly assembly assembly assembly assembly assembly Cookie Cookie Cookie Cookie [...] acknowledg acknowledg acknowledg [...] discussions discussions discussions discussions discussions discussions [...]

* I am using the models directly. Briefly, the script reads text from a .txt file, translates it, and stores the result in an output .txt file.
* The texts I am using are meaningful articles, actual production data, so the original text is legitimate.
* This is the code snippet I am using for the translation:

```python
import torch
from transformers import MarianMTModel, MarianTokenizer


def translate_text_file(input_filename, output_filename):
    # Load tokenizer and model (fetch_model_name, source_language and
    # target_language are defined elsewhere in the script)
    model_name = fetch_model_name(source_language, target_language)

    if "tc-big" in model_name:
        tokenizer = MarianTokenizer.from_pretrained(model_name)
        model = MarianMTModel.from_pretrained(model_name)

        # Read the whole input file as a single string
        with open(input_filename, "r", encoding="utf-8") as file:
            input_text = file.read()

        # Tokenize and translate the text
        inputs = tokenizer(
            input_text, return_tensors="pt", padding=True, truncation=True
        )

        with torch.no_grad():
            outputs = model.generate(**inputs)

        translated_text = [
            tokenizer.decode(t, skip_special_tokens=True) for t in outputs
        ]

        # Save the translated text to the output file
        with open(output_filename, "w", encoding="utf-8") as file:
            for translation in translated_text:
                file.write(translation + "\n")
```


* This code is part of a larger script. As far as I know, I have written it as documented, although it is possible that the way I am calling the model is wrong for this model family.
* Either way, with this script the generated translation is not proper; one possible culprit in the snippet is discussed in the sketch below.
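
One thing worth noting about the snippet above (a hypothesis, not a confirmed cause): the whole file is read into a single string, so `truncation=True` silently drops everything past the model's maximum input length, and `generate` returns only one output sequence. A minimal sketch that translates line by line instead, reusing the imports and the `fetch_model_name` helper from the snippet above (`translate_text_file_by_line` is a hypothetical name, and the batch size is an arbitrary choice):

```python
def translate_text_file_by_line(input_filename, output_filename):
    # Hypothetical variant: translate one batch of lines at a time so that
    # no single input exceeds the model's maximum sequence length
    model_name = fetch_model_name(source_language, target_language)
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    with open(input_filename, "r", encoding="utf-8") as file:
        lines = [line.strip() for line in file if line.strip()]

    batch_size = 8  # arbitrary; tune for your hardware
    with open(output_filename, "w", encoding="utf-8") as out:
        for i in range(0, len(lines), batch_size):
            batch = lines[i : i + batch_size]
            inputs = tokenizer(
                batch, return_tensors="pt", padding=True, truncation=True
            )
            with torch.no_grad():
                outputs = model.generate(**inputs)
            for t in outputs:
                out.write(tokenizer.decode(t, skip_special_tokens=True) + "\n")
```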

## slow translation of models
* Is there any hardware factor that might affect the speed of the process?
* What is the typical speed at which an opus-mt-<src_lang>-<tgt_lang> model translates?
* These are the snapshots I took after timing the executions.
* This one is timed when translating Hebrew to English:
![Screenshot (110)](https://github.com/Helsinki-NLP/Opus-MT/assets/98043363/fc01e509-6309-4979-a4b0-06c7e1d37bc6)
* `Execution time: 172 seconds`
* This one is timed for the Russian-to-English translation:
![Screenshot (111)](https://github.com/Helsinki-NLP/Opus-MT/assets/98043363/c03732d5-8e54-4cd4-821d-97fa3a70a447)
* `Execution time: 22 seconds`
* These models are built on Marian NMT, which is heavily hardware-dependent. If translation speed depends on the hardware, what speed should one expect on a typical machine? (A quick device-placement check is sketched after this list.)
* Looking at the results, the time it took to translate Hebrew to English is far too long, and even after waiting patiently the result was not usable.
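
As a side note on the hardware question, a minimal sketch of moving the model and inputs onto a GPU when one is available; this is plain PyTorch device placement, nothing Opus-MT-specific, and on CPU these models are typically much slower:

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-he-en"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).to(device)

# Any short source sentence works here; this is just a timing probe
inputs = tokenizer(["שלום עולם"], return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
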
droussis commented 10 months ago

This seems to be the case with all of their models that originate from the Tatoeba Challenge. Only the models included here seem to work with Hugging Face. Up until a month ago, I hadn't encountered such problems.

That is probably also why the translation time is so slow! The ru-en model must be one of the older models that still work.

Hope that helps, but I haven't found any fixes!

mo-shahab commented 10 months ago

Yes, this narrows things down, although I am not really sure what the Tatoeba Challenge is. In this thread the author explains the likely cause; hope it helps you:

> Yeah, I solved the problem. It's mainly a problem in the sampling/decoding: the default decoding approach for all of the models is greedy search. This article is very helpful and will help you learn more about how to sample/decode your generated text: https://huggingface.co/blog/how-to-generate
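
Based on that advice, a minimal sketch of overriding the default greedy decoding with beam search and an n-gram repetition block; the parameter values are illustrative, not tuned:

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-he-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = ["<Hebrew source sentence>"]  # replace with real input
inputs = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        num_beams=4,             # beam search instead of greedy decoding
        no_repeat_ngram_size=3,  # blocks the repeated-token loops shown above
        max_new_tokens=512,
        early_stopping=True,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```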

jorgtied commented 6 months ago

The Tatoeba Challenge models are trained on this data compilation: https://github.com/Helsinki-NLP/Tatoeba-Challenge/. For speed, I recommend using the native Marian-NMT models rather than the PyTorch versions from the transformers library. Alternatively, you can also convert the models to CTranslate2 for fast decoding.
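
For reference, a minimal sketch of the CTranslate2 route, following the CTranslate2 documentation for converting transformers checkpoints (the output directory name here is arbitrary):

```python
# Convert the checkpoint once, from the shell:
#   ct2-transformers-converter --model Helsinki-NLP/opus-mt-tc-big-he-en \
#       --output_dir opus-mt-tc-big-he-en-ct2
import ctranslate2
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained(
    "Helsinki-NLP/opus-mt-tc-big-he-en"
)
translator = ctranslate2.Translator("opus-mt-tc-big-he-en-ct2", device="cpu")

# CTranslate2 operates on token strings rather than token ids
source = tokenizer.convert_ids_to_tokens(
    tokenizer.encode("<Hebrew source sentence>")  # replace with real input
)
results = translator.translate_batch([source], beam_size=4)
target_tokens = results[0].hypotheses[0]
print(
    tokenizer.decode(
        tokenizer.convert_tokens_to_ids(target_tokens), skip_special_tokens=True
    )
)
```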

Otherwise, is the output still broken when using the transformers models? I think this has been fixed, hasn't it? If not, it would be a question to ask at the Hugging Face repositories.