huggingface / alignment-handbook

Robust recipes to align language models with human and AI preferences
https://huggingface.co/HuggingFaceH4
Apache License 2.0

Tokenizer model_max_length #47

Open binarycrayon opened 10 months ago

binarycrayon commented 10 months ago

Hello,

I was seeing a warning while fine-tuning Mistral and tracked it down to this line:

https://github.com/huggingface/alignment-handbook/blob/main/src/alignment/model_utils.py#L71

Because Mistral's tokenizer reports a very large model max length, model_max_length gets set to 2048. However, my training data contains sequences longer than that, e.g. 4000 characters. Would this be a problem?
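For reference, the tokenizer itself reports a placeholder value rather than the model's real context length, which is what trips that check; a minimal way to see this (the repo id is an assumption, verify locally):

    # Inspect what the tokenizer reports out of the box (repo id assumed for illustration)
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
    print(tok.model_max_length)  # a huge sentinel value, which triggers the > 100_000 clamp to 2048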

Thank you!

eek commented 10 months ago

I have the exact same question. I've changed max_seq_length in config_lora.yaml to 4096, but I still get these warnings:

Token indices sequence length is longer than the specified maximum sequence length for this model (2485 > 2048). Running this sequence through the model will result in indexing errors
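That warning comes from the tokenizer rather than the trainer: it fires whenever an encoded sequence is longer than tokenizer.model_max_length, regardless of what max_seq_length is set to in the yaml. A rough reproduction sketch (the repo id and the synthetic text are assumptions):

    # The warning depends only on tokenizer.model_max_length at encode time
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
    long_text = "word " * 3000      # encodes to well over 2048 tokens

    tok.model_max_length = 2048
    tok(long_text)                  # emits the "Token indices sequence length..." warning

    tok.model_max_length = 4096
    tok(long_text)                  # no warning once the limit covers the sequence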

ChenDRAG commented 10 months ago

Same problem here!

mlmonk commented 10 months ago

I found that it comes from here. During initialization, the tokenizer does not read the max length from the model config.

As a quick hack, I was able to change the hard-coded value to 4096 and then reinstall alignment-handbook with:

    cd ./alignment-handbook/
    python -m pip install .
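An alternative that avoids patching the installed package is to override the limit after the tokenizer is built; the same one-line override works on whatever get_tokenizer returns inside the run script. A minimal sketch (the repo id and the 4096 target are assumptions):

    # Override the clamp after loading instead of editing model_utils.py
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
    tokenizer.model_max_length = 4096  # match the max_seq_length you actually train with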
bugface commented 8 months ago

@lewtun Would you be able to comment here on why max_seq_len has been hard-coded instead of read from the config? Is there any reason for this decision? Thanks

eryk-mazus commented 8 months ago

I'm also curious about that, especially since zephyr-7b-beta has model_max_length set to 1000000000000000019884624838656, so those models were somehow exempt from this 😐

There is a max_length argument passed in config.yaml, so setting model_max_length to a hard-coded value inside the script seems pointless.

Shiniri commented 4 months ago

This also threw me off and caused a bug that was unnecessarily complicated to fix. I second the notion that this snippet:

    # Set reasonable default for models without max length
    if tokenizer.model_max_length > 100_000:
        tokenizer.model_max_length = 2048

should not be there if there is a config value in the yaml. It leads to confusing results.
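One possible shape for a fix, as a sketch only: take the limit from the recipe config and keep the clamp purely as a fallback for tokenizers that report the huge placeholder value. The data_args.max_seq_length name here is an assumption about the handbook's argument dataclasses, not a verified field:

    # Sketch: prefer the configured sequence length, fall back to the old clamp
    max_seq_length = getattr(data_args, "max_seq_length", None)  # assumed field name
    if max_seq_length is not None:
        tokenizer.model_max_length = max_seq_length
    elif tokenizer.model_max_length > 100_000:
        # Only clamp tokenizers that report the huge placeholder value
        tokenizer.model_max_length = 2048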