huggingface / trl

Train transformer language models with reinforcement learning.
http://hf.co/docs/trl
Apache License 2.0

Zero Training Loss while finetuning a mistral model for summarization #1225

Closed: Praful932 closed this issue 10 months ago

Praful932 commented 10 months ago

I am trying to finetune a bnb-quantized model for summarization using LoRA. Base model: cognitivecomputations/dolphin-2.2.1-mistral-7b

I am training it for 1 epoch, and the loss is weirdly at 0 from the first step. I tried both left and right padding as per issue #834. I am using the ChatML format that has been suggested for this model.
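For reference, the ChatML format should render each training example roughly like this (a sketch; `<dialogue>` and `<summary>` are placeholders for the SAMSum dataset fields):

```
<|im_start|>user
Summarize : <dialogue><|im_end|>
<|im_start|>assistant
<summary><|im_end|>
```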

[wandb run screenshot: training loss flat at 0 from the first logged step]

Code Snippet

```python
import os
import gc
import ctypes

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training, AutoPeftModelForCausalLM
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForLanguageModeling,
    GPTQConfig,
    Trainer,
    TrainingArguments,
)
from trl import DataCollatorForCompletionOnlyLM, SFTTrainer


def clean_memory():
    gc.collect()
    ctypes.CDLL("libc.so.6").malloc_trim(0)
    torch.cuda.empty_cache()


class DatasetWrapper:
    def __init__(self, hf_dataset):
        self.hf_dataset = hf_dataset

    def to_chat_format(self, tokenizer, prompt_template="Summarize : {dialogue}",
                       input_key="", output_key="", system_prompt="", add_output=True):
        """Converts the dataset to a chat based format"""
        self.hf_dataset = self.hf_dataset.map(
            lambda x: {
                "chat_format": ([{"role": "system", "content": system_prompt}] if system_prompt else [])
                + [{"role": "user", "content": prompt_template.format(**{input_key: x[input_key]})}]
                + ([{"role": "assistant", "content": x[output_key]}] if add_output else [])
            }
        )
        self.hf_dataset = self.hf_dataset.map(
            lambda x: {
                "formatted_chat": tokenizer.apply_chat_template(
                    x["chat_format"], tokenize=False, add_generation_prompt=True
                )
            }
        )


valid_dataset = DatasetWrapper(load_dataset("samsum")["validation"])
test_dataset = DatasetWrapper(load_dataset("samsum")["test"])

model_id = "cognitivecomputations/dolphin-2.2.1-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
model_config = AutoConfig.from_pretrained(model_id)

valid_dataset.to_chat_format(tokenizer, input_key="dialogue", output_key="summary", add_output=False)
test_dataset.to_chat_format(tokenizer, input_key="dialogue", output_key="summary", add_output=False)

tokenizer.model_max_length = 1024


def get_model_for_training():
    globals().pop("model", None)
    clean_memory()
    quantization_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        use_cache=False,
        torch_dtype=torch.bfloat16,
        quantization_config=quantization_config,
        device_map="auto",
    )
    model.gradient_checkpointing_enable()
    model = prepare_model_for_kbit_training(model)
    peft_config = LoraConfig(
        r=8,
        lora_alpha=8,
        target_modules=["k_proj", "o_proj", "q_proj", "v_proj"],
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, peft_config)
    model.print_trainable_parameters()
    return model, peft_config


model, peft_config = get_model_for_training()

tokenizer.padding_side = "right"
print(tokenizer.padding_side)

os.environ["WANDB_PROJECT"] = ""
os.environ["WANDB_ENTITY"] = ""
os.environ["WANDB_API_KEY"] = ""

args = TrainingArguments(
    output_dir="/kaggle/working/model_2/",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    learning_rate=2e-5,
    lr_scheduler_type="constant",
    logging_strategy="steps",
    logging_steps=20,
    save_strategy="epoch",
    seed=42,
    report_to="wandb",
)

max_seq_length = 1024
instruction_template = "<|im_start|>user\n"
response_template = "<|im_start|>assistant\n"

# only want to train on answers, not doing lm
collator = DataCollatorForCompletionOnlyLM(
    response_template=response_template,
    instruction_template=instruction_template,
    tokenizer=tokenizer,
)

trainer = SFTTrainer(
    model=model,
    train_dataset=valid_dataset.hf_dataset,
    peft_config=peft_config,
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    packing=False,  # False for completion only LM collator
    dataset_text_field="formatted_chat",
    data_collator=collator,
    args=args,
)
trainer.train()
```
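One quick way to see where the loss is coming from (a minimal sketch reusing `collator`, `tokenizer`, and `valid_dataset` defined above) is to count how many tokens the collator actually leaves unmasked, since `DataCollatorForCompletionOnlyLM` sets every label outside the completion span to -100:

```python
# Sanity check (sketch): count the tokens that contribute to the loss.
# The collator masks everything outside the completion span with -100,
# and -100 labels are ignored by the cross-entropy loss.
example = valid_dataset.hf_dataset[0]["formatted_chat"]
batch = collator([tokenizer(example)])
num_supervised = (batch["labels"] != -100).sum().item()
print(f"tokens contributing to loss: {num_supervised}")
# If this prints 0, every label is masked and the training loss will be 0.
```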

System Info

- `transformers` version: 4.36.2
- Platform: Linux-5.15.133+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config:    not found
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): 2.13.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)
- Jax version: 0.4.21
- JaxLib version: 0.4.21
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no

cc @younesbelkada @lvwerra

Praful932 commented 10 months ago

I was able to resolve this; it was a bug in my code. I wasn't passing the assistant completion in the dataset that was passed to the model, so the model had nothing to compute loss on.

Thanks, closing this issue.

aabhasgupta commented 5 months ago

@Praful932, can you share how you were able to resolve the issue? I am still getting NaN as the loss.

Praful932 commented 4 months ago

@aabhasgupta I just fixed the line above so that the assistant completion is included in the training examples: `valid_dataset.to_chat_format(tokenizer, input_key = "dialogue", output_key = "summary", add_output=True)`. Note `add_output=True`.
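For completeness, a minimal check (sketch, reusing the objects from the snippet above) that the fix actually puts the target summary back into the rendered example:

```python
# With add_output=True the assistant turn carries the reference summary,
# so the collator can unmask completion tokens and the loss becomes non-zero.
valid_dataset.to_chat_format(tokenizer, input_key="dialogue", output_key="summary", add_output=True)
row = valid_dataset.hf_dataset[0]
assert row["summary"] in row["formatted_chat"]
```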