rhulha / lora

Train Large Language Models (LLMs) using LoRA

save_pretrained issue #1

Closed: angelovAlex closed this issue 1 year ago

angelovAlex commented 1 year ago

Hey Ray. Did you find a solution for the save_pretrained issue? I am experiencing the same problem. According to the stack trace, it crashes simply on calling model.state_dict(), because bitsandbytes tries to allocate additional memory in 'undo_layout'.
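
For context, the failing call on my side is just the normal adapter save at the end of training, roughly like this (a minimal sketch only; the model id and LoRA settings are placeholders, not the actual training script):

    # Minimal sketch of the save step that triggers the OOM; the model id and
    # LoRA settings below are placeholders, not the actual training script.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained(
        "your-base-model",   # placeholder model id
        load_in_8bit=True,   # 8-bit weights via bitsandbytes
        device_map="auto",
    )
    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # LLaMA-style placeholder
    )
    model = get_peft_model(base, lora_config)

    # ... training ...

    # save_pretrained() walks model.state_dict(), and that state_dict call is
    # where bitsandbytes' undo_layout tries to allocate the extra GPU memory.
    model.save_pretrained("lora-adapter")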

After some experimenting I came up with a workaround and managed to save the adapter successfully. I don't know if it has any side effects, so I would recommend using it only for saving the result of LoRA training.

First of all, remove the original peft and install this version: pip install git+https://github.com/huggingface/peft.git@70af02a2bca5a63921790036b2c9430edf4037e2 (even with lots of RAM and no CUDA errors, the adapter file was always 433 bytes on the latest peft, but this version seems to work fine).

Depending on your installation (just look closely at the stack trace of the CUDA error), you need to find where the bitsandbytes library is installed and edit bitsandbytes/nn/modules.py.
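
One quick way to locate the file to edit is just to print the installed module's path:

    # Print the path of the installed bitsandbytes/nn/modules.py
    import bitsandbytes.nn.modules as bnb_modules
    print(bnb_modules.__file__)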

I made a tweak to move the tensors to the CPU and removed the cloning and the exception check. I would recommend renaming the original function, putting this one next to it, running training, and restoring the original function afterwards:

def _save_to_state_dict(self, destination, prefix, keep_vars):
    if not self.state.has_fp16_weights and self.state.CB is None and self.state.CxB is not None:
        # reorder weight layout back from ampere/turing to row
        reorder_layout = True
        #weight_clone = self.weight.data.clone()   # removed: cloning the original weight
    else:
        reorder_layout = False

    #try:
    if reorder_layout:
        # patched: move CxB and tile_indices to the CPU first, so undo_layout
        # does not have to allocate extra GPU memory
        self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())

    super()._save_to_state_dict(destination, prefix, keep_vars)

    # we only need to save SCB as extra data, because CB for quantized weights is already stored in weight.data
    weight_name = "SCB"

    # case 1: .cuda was called, SCB is in self.weight
    param_from_weight = getattr(self.weight, weight_name)
    # case 2: self.init_8bit_state was called, SCB is in self.state
    param_from_state = getattr(self.state, weight_name)

    key_name = prefix + f"{weight_name}"
    if param_from_weight is not None:
        destination[key_name] = param_from_weight if keep_vars else param_from_weight.detach()
    elif not self.state.has_fp16_weights and param_from_state is not None:
        destination[key_name] = param_from_state if keep_vars else param_from_state.detach()
    #finally:   # removed: restoring the cloned weight afterwards
    #    if reorder_layout:
    #        self.weight.data = weight_clone
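
For what it's worth, the same change can probably also be applied at runtime instead of editing the installed file, by overriding the method before saving and restoring it afterwards. This is only a sketch of that idea; it assumes the function above is Linear8bitLt._save_to_state_dict and that undo_layout is reachable through bitsandbytes.nn.modules:

    # Sketch: apply the patch at runtime instead of editing the installed file.
    # Assumes the patched function above is Linear8bitLt._save_to_state_dict and
    # that undo_layout is available in the bitsandbytes.nn.modules namespace.
    import bitsandbytes as bnb
    import bitsandbytes.nn.modules as bnb_modules


    def _save_to_state_dict_cpu(self, destination, prefix, keep_vars):
        # same logic as the patched function above, written as a standalone override
        if not self.state.has_fp16_weights and self.state.CB is None and self.state.CxB is not None:
            # reorder the weight layout back from ampere/turing to row, on the CPU
            self.weight.data = bnb_modules.undo_layout(
                self.state.CxB.cpu(), self.state.tile_indices.cpu()
            )

        # regular parameters are handled by nn.Module._save_to_state_dict
        super(bnb.nn.Linear8bitLt, self)._save_to_state_dict(destination, prefix, keep_vars)

        # SCB is saved as extra data; CB is already stored in weight.data
        param_from_weight = getattr(self.weight, "SCB")
        param_from_state = getattr(self.state, "SCB")
        key_name = prefix + "SCB"
        if param_from_weight is not None:
            destination[key_name] = param_from_weight if keep_vars else param_from_weight.detach()
        elif not self.state.has_fp16_weights and param_from_state is not None:
            destination[key_name] = param_from_state if keep_vars else param_from_state.detach()


    # swap the method in before saving, and restore it afterwards
    _original_save = bnb.nn.Linear8bitLt._save_to_state_dict
    bnb.nn.Linear8bitLt._save_to_state_dict = _save_to_state_dict_cpu
    # model.save_pretrained("lora-adapter")
    # bnb.nn.Linear8bitLt._save_to_state_dict = _original_save
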
angelovAlex commented 1 year ago

It seems the peft step is unnecessary, and the 443-byte file I had was caused by an issue in my script.

rhulha commented 1 year ago

Note to self: The change is in this line: self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())
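
In other words (assuming the stock function is identical apart from the .cpu() calls and the removed clone/restore, which is what the commented-out lines in the patch suggest):

    # stock bitsandbytes: undo_layout runs on the GPU tensors and needs extra GPU memory
    self.weight.data = undo_layout(self.state.CxB, self.state.tile_indices)

    # patched version: move CxB and tile_indices to the CPU first
    self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())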

rhulha commented 1 year ago

Wow, you are my hero, @angelovAlex!
If you want, you can post this on this Stack Overflow question and I will credit you with the correct answer:

https://stackoverflow.com/questions/76281856/getting-cuda-out-of-memory-when-calling-save-pretrained-in-a-script-that-tries-l

rhulha commented 1 year ago

I will add this patch to setup_lambdalabs.py.

rhulha commented 1 year ago

setup_lambdalabs.py now includes the save_pretrained patch.

rhulha commented 1 year ago

See also this comment: https://www.reddit.com/r/LocalLLaMA/comments/13ws492/completely_lost_regarding_training_llama_model/jmedmxh/

"If you get an out of memory error while saving, that's a bitsandbytes bug that I hope they've fixed but if not you'll need to downgrade to 3.72."