2U1 / Phi3-Vision-Finetune

An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft.
Apache License 2.0

imports issues #2

Closed sylvain471 closed 4 months ago

sylvain471 commented 4 months ago

Hello,

many thanks for this very nice piece of work!

I couldn't get the finetune/finetune_lora scripts to run on a freshly launched Ubuntu EC2 instance without substantial refactoring of the files and imports:

from trainer import Phi3VTrainer
from data import make_supervised_data_module
from params import DataArguments, ModelArguments, TrainingArguments
from train_utils...

- removed the leading dot from the relative imports where it appeared in the scripts

I wonder why the original import structure fails with my setting... 

-  I also found it easier to use a minimal requirements.txt using [UV](https://github.com/astral-sh/uv) environment tool

transformers
torch
peft
pillow
requests
ipykernel
accelerate
torchvision
ujson
deepspeed
setuptools
wandb

uv venv
source .venv/bin/activate
uv pip install -r requirements.txt
uv pip install flash-attn

sylvain471 commented 4 months ago

The trick above solves the problem with the fine-tuning script, but I now run into trouble with the adapter merging script

python src/merge_lora_weights.py \
    --model-path Phi-3-vision-128k-instruct/ \
    --model-base microsoft/Phi-3-vision-128k-instruct \
    --save-model-path output/testing/checkpoint-39 \
    --safe-serialization

Modifying the merge_lora_weights.py imports to match the modified structure (see above)

#from training.phi3_vision import Phi3VForCausalLM, Phi3VConfig, Phi3VProcessor
from training.configuration_phi3_v import Phi3VConfig
from training.modeling_phi3_v import Phi3VForCausalLM
from training.processing_phi3_v import Phi3VProcessor

and putting back the leading dot for the local imports where needed gets things started, but I end up with

AttributeError: module transformers has no attribute Phi3VImageProcessor

which I can't solve for the moment
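For what it's worth, that error usually means something is looking the class up as an attribute of the transformers package (e.g. via `getattr(transformers, "Phi3VImageProcessor")`), while the class is actually defined in this repo's own files. A minimal stdlib sketch of the failure mode, using `json` as a stand-in for transformers and a hypothetical local class:

```python
import json  # stand-in for the transformers package


class CustomEncoder:  # hypothetical local class, playing the role of Phi3VImageProcessor
    pass


# looking the class up as an attribute of the package fails...
try:
    json.CustomEncoder
except AttributeError as e:
    print(e)  # module 'json' has no attribute 'CustomEncoder'

# ...while referencing the local definition directly works fine
print(CustomEncoder.__name__)
```

So the fix is to make sure the lookup resolves against the repo's own modules (as the import rewrites above do), not against the installed transformers package.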

mirunabetianu commented 4 months ago

Hello! I encountered the same import issues when I tried running the finetuning script. However, I found an easier fix to this by appending the path of the src directory to the python path. Basically, python couldn't locate the modules in the src directory, since they were not in the python path by default.

import sys
import os

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

I added this to both utils.py and train.py.
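To illustrate why this works, here is a self-contained sketch (a throwaway temp directory stands in for the repo's src folder): appending a directory to sys.path is exactly what makes its modules importable.

```python
import importlib
import os
import sys
import tempfile

# throwaway directory standing in for the repo's src/ directory
src = tempfile.mkdtemp()
with open(os.path.join(src, "params.py"), "w") as f:
    f.write("ANSWER = 42\n")

# before the append, Python cannot locate the module...
try:
    importlib.import_module("params")
except ModuleNotFoundError:
    print("params not found yet")

# ...after the append, the import resolves normally
sys.path.append(src)
params = importlib.import_module("params")
print(params.ANSWER)  # 42
```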

ericnesschrw commented 4 months ago

It's also possible to set the PYTHONPATH in an environment variable like:

export PYTHONPATH=/path/to/repo/src
sylvain471 commented 4 months ago

> Hello! I encountered the same import issues when I tried running the finetuning script. However, I found an easier fix to this by appending the path of the src directory to the python path. Basically, python couldn't locate the modules in the src directory, since they were not in the python path by default.
>
> import sys
> import os
>
> sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
>
> I added this to both utils.py and train.py.

thanks, it works fine!

I now need to figure out how to properly use the merge_lora_weights.py

sylvain471 commented 4 months ago

> It's also possible to set the PYTHONPATH in an environment variable like:
>
> export PYTHONPATH=/path/to/repo/src

thanks, I'll give it a try!

2U1 commented 4 months ago

@sylvain471 @ericnesschrw @mirunabetianu Thanks for letting me know about the issue. I hadn't encountered it when testing locally on two machines, so I didn't notice. I've added bash files in the script folder to run the scripts! I've also just added merge_lora.sh to merge the weights more easily without hitting this issue.

Sorry for the inconvenience. Next time I'll use setuptools for my other projects.

2U1 commented 4 months ago

@sylvain471

Let me know if it still happens. You can fix the error by adding

export PYTHONPATH=src:$PYTHONPATH

at the top of finetune_lora.sh or finetune.sh.

Also,

#!/bin/bash

export PYTHONPATH=src:$PYTHONPATH

python src/merge_lora_weights.py \
    --model-path /path/to/model \
    --model-base microsoft/Phi-3-vision-128k-instruct  \
    --save-model-path /path/to/save \
    --safe-serialization

You can also merge the LoRA weights using this script. I've also fixed a typo in merge_lora_weights.py that caused an error when parsing arguments.
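Conceptually, the merge itself is just arithmetic: the adapter's low-rank update is folded back into each base weight as W' = W + (alpha / r) * B @ A. A dependency-free sketch of that step (plain Python lists, hypothetical shapes, not the script's actual code):

```python
def merge_lora(W, A, B, alpha, r):
    """Fold a LoRA update into a base weight matrix.

    W: base weight, m x n
    B: LoRA B matrix, m x r
    A: LoRA A matrix, r x n
    """
    scale = alpha / r
    m, n = len(W), len(W[0])
    return [
        [
            W[i][j] + scale * sum(B[i][k] * A[k][j] for k in range(r))
            for j in range(n)
        ]
        for i in range(m)
    ]


# tiny example: rank-1 update folded into a 2x2 identity weight
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]
A = [[0.0, 2.0]]
print(merge_lora(W, A, B, alpha=2.0, r=1))  # [[1.0, 4.0], [0.0, 1.0]]
```

After merging, the adapter matrices are no longer needed at inference time, which is why the merged checkpoint can be loaded like a plain base model.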

Again, I'm sorry for the inconvenience.

sylvain471 commented 4 months ago

Hello @2U1,

So far so good with your updated finetune_lora.sh script; the export works fine.

I am still having issues with the adapter/merged model: despite evidence that the model learns, inference shows rather poor results, if any. I need to investigate; I might write a notebook to get a more interactive view of things.

Thanks again for your work!

2U1 commented 4 months ago

@sylvain471 I have no idea about that. Mine works just fine. If you find the cause, please let me know.

sylvain471 commented 4 months ago

@2U1 I managed to load and merge properly without safe-serialization, which still causes trouble. This might come from the accelerate package (your requirements.txt specifies 0.29.2 while I am running 0.32.0), or it might just be me doing silly things again.

Anyway the merged model now answers as taught, thanks for your support!