axolotl-ai-cloud / axolotl


Preprocess --debug does not show the newline \n token if the previous string is ">", but does if I add any other letter to the role (fastchat) #1890

Closed: Nero10578 closed this issue 2 weeks ago

Nero10578 commented 2 months ago

Expected Behavior

I am training Phi 3.5, and I modified FastChat to follow the Phi 3.5 chat template:

<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>

The expected behaviour is for the tokenization to show a newline token after <|end|>:

 <|end|>(-100, 32007) (-100, 29871)
(-100, 13) <|assistant|>(-100, 32001) Ori(11678, 11678)

Current Behaviour

There is no newline token after <|end|>:

<|end|>(-100, 32007) <|assistant|>(-100, 32001) Bol(8922, 8922)
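
For reference, the turn boundary can also be checked against the tokenizer directly, outside of axolotl (a rough sketch; any Phi-3.5-mini-instruct tokenizer works):

    from transformers import AutoTokenizer

    # Path from the config below; the Hub id microsoft/Phi-3.5-mini-instruct also works.
    tok = AutoTokenizer.from_pretrained("/home/user/models/Phi-3.5-mini-instruct")

    # Tokenize the turn boundary that the template is supposed to produce.
    ids = tok.encode("<|end|>\n<|assistant|>", add_special_tokens=False)
    print(list(zip(ids, tok.convert_ids_to_tokens(ids))))
    # A newline token (13) should appear between <|end|> (32007) and <|assistant|> (32001).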

Steps to reproduce

  1. git clone https://github.com/lm-sys/FastChat
  2. Modify fastchat/conversation.py by adding a phi-3-5 chat template, like so:
    # Phi3.5 template
    # reference: https://huggingface.co/microsoft/Phi-3.5-mini-instruct
    # (assumes a PHI member has also been added to the SeparatorStyle enum)
    register_conv_template(
        Conversation(
            name="phi-3-5",
            system_template="<|system|>\n{system_message}<|end|>\n",
            roles=("<|user|>", "<|assistant|>"),
            sep_style=SeparatorStyle.PHI,
            sep="",
            stop_str="<|end|>",
            stop_token_ids=[0, 32007],
        )
    )
  3. Modify axolotl monkeypatch/fastchat_conversation_turns.py by adding a phi-3-5 branch to get_turns, like so (a sanity check for steps 2 and 3 is sketched after this list):
    if self.sep_style == SeparatorStyle.PHI:
        if self.system_message:
            yield "", system_prompt
        for i, (role, message) in enumerate(self.messages):
            if message:
                yield f"{role}\n", f"{message.strip()}<|end|>\n"
            else:
                yield f"{role}\n", ""
        return
  4. Install modified Fastchat with pip3 install -e ".[model_worker]" --no-deps
  5. Install modified axolotl with pip3 install -e ".[flash-attn,deepspeed]" --no-deps
  6. Run python -m axolotl.cli.preprocess lora-sft.yml --debug on a Phi-3.5 training dataset.
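
To sanity-check steps 2 and 3, the rendered turns can be printed directly. A rough sketch, assuming the template from step 2 is registered and the monkeypatched get_turns from step 3 has been applied:

    from fastchat.conversation import get_conv_template

    # Assumes the phi-3-5 template from step 2 is registered and the
    # get_turns patch from step 3 is active on Conversation.
    conv = get_conv_template("phi-3-5")
    conv.set_system_message("You are a helpful assistant.")
    conv.append_message(conv.roles[0], "How to explain Internet for a medieval knight?")
    conv.append_message(conv.roles[1], None)

    for role_part, message_part in conv.get_turns():
        print(repr(role_part + message_part))  # the trailing \n separators should be visible here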

Config yaml

base_model: /home/user/models/Phi-3.5-mini-instruct
tokenizer_type: AutoTokenizer

train_on_inputs: false
group_by_length: false
load_in_8bit:
load_in_4bit: false
strict: false
sequence_len: 16384
bf16: auto
fp16: 
tf32: false
flash_attention: true

shuffle_merged_datasets: true

# Data
datasets:
  - path: /home/user/datasets/dataset.jsonl
    type: sharegpt
    conversation: phi-3-5

warmup_steps: 10
dataset_prepared_path: ./lora_last_run_prepared

# Iterations
num_epochs: 1
saves_per_epoch: 8
saves_total_limit: 8

# Evaluation
val_set_size: 0.0025
eval_table_size:
eval_max_new_tokens: 128
eval_sample_packing: false
evals_per_epoch: 8

# LoRA
output_dir: ./lora_out
adapter: lora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
lora_modules_to_save:

save_safetensors: true

loraplus_lr_ratio: 16

# Sampling
sample_packing: true
pad_to_sequence_len: true

# Batching
gradient_accumulation_steps: 16
micro_batch_size: 1
gradient_checkpointing: unsloth
gradient_checkpointing_kwargs:
  use_reentrant: true

# wandb
wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb
wandb_project: phi-3.5-mini
wandb_entity: # A wandb Team name if using a Team
wandb_watch:
wandb_name: v1-lora-16384
wandb_run_id: # Set the ID of your wandb run
wandb_log_model: # "checkpoint" to log model to wandb Artifacts every `save_steps` or "end" to log only at the end of training

# Optimizer
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00001

# Misc
early_stopping_patience:
auto_resume_from_checkpoints: true
logging_steps: 1
debug:
deepspeed:
weight_decay: 0.0
special_tokens:
  eos_token: <|end|>

Possible solution

No response

Which Operating Systems are you using?

Python Version

3.11

axolotl branch-commit

0aeb277456f0ed79ab46191a12998fccc257d414

Nero10578 commented 2 months ago

Btw, Phi 3.5 otherwise trains just fine on axolotl, as long as you upgrade to the latest transformers, e.g.:
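
    pip3 install --upgrade transformers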

Nero10578 commented 2 months ago

Yea, I can't figure out why the newline doesn't appear after <|end|>. Also, if I don't set eos_token in the config to <|end|>, it keeps putting <|endoftext|> on every turn:

it(372, 372) .(29889, 29889) <|end|>(32007, 32007) <|endoftext|>(32000, 32000) <|user|>(-100, 32010) User(-100, 4911) :(-100, 29901) *(-100, 334)

NanoCode012 commented 2 weeks ago

Hey, the former sounds like a weird bug.

Regarding your double-EOS issue: axolotl checks whether the last token of an example is the EOS and appends it if not found. By setting eos_token in the config to <|end|>, you satisfied that criterion. That is not to say the checker is wrong, though; I think the root cause is that fastchat hardcodes the EOS to <|end|>, which is what triggers this.
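
Roughly, the check behaves like this (a sketch of the idea, not axolotl's literal code):

    # Sketch: append the EOS only when the example does not already end with it.
    def ensure_eos(input_ids: list[int], labels: list[int], eos_token_id: int):
        if input_ids and input_ids[-1] != eos_token_id:
            input_ids.append(eos_token_id)
            labels.append(eos_token_id)
        return input_ids, labels

    # With fastchat emitting <|end|> (32007) but the tokenizer's default EOS being
    # <|endoftext|> (32000), this appends a second terminator; overriding
    # eos_token to <|end|> satisfies the check instead.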

Since we've deprecated fastchat, could you try this dataset config instead (in place of the sharegpt settings under datasets)?

datasets:
  - path: /home/user/datasets/dataset.jsonl
    type: chat_template
    chat_template: phi_35

Nero10578 commented 2 weeks ago

Thanks for the reply. Yes, chat_template seems much simpler to use, and it just works! Thanks!
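
For anyone hitting the same issue, a quick way to see the expected rendering (a rough sketch using the tokenizer's bundled chat template, rather than axolotl's phi_35 registration):

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How to explain Internet for a medieval knight?"},
    ]
    # tokenize=False returns the rendered string; the \n after each <|end|> is visible here.
    print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))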