BAAI-DCAI / M3D

M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models

Problems with evaluation #6

Closed · shiym2000 closed this issue 3 weeks ago

shiym2000 commented 5 months ago

When I ran the script Bench/eval/eval_caption.py, I hit the following error. How can I fix it?

Traceback (most recent call last):
  File "/mnt/nvme_share/shiym/projects_3rd/M3D/Bench/eval/eval_caption.py", line 10, in <module>
    from Bench.dataset.multi_dataset import CapDataset
ModuleNotFoundError: No module named 'Bench'
baifanxxx commented 5 months ago

Hi,

'Bench' is indeed in the codebase, as you can see. I suspect you did not set up your Python path correctly. You can try this:

export PYTHONPATH=$PYTHONPATH:/PATH/M3D
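
If you prefer not to modify the environment, an alternative is to make the repo root importable from within the script itself. A minimal sketch (assuming eval_caption.py sits at <repo_root>/Bench/eval/eval_caption.py, as the path in your traceback suggests):

# Top of Bench/eval/eval_caption.py: prepend the repo root to sys.path
# so `import Bench.*` resolves regardless of the working directory.
import os
import sys

REPO_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".."))
if REPO_ROOT not in sys.path:
    sys.path.insert(0, REPO_ROOT)

from Bench.dataset.multi_dataset import CapDataset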
shiym2000 commented 4 months ago

Thank you. When I evaluated the model after following README.md to pretrain, finetune, and merge, I hit this problem:

Could not locate the configuration_phi3.py inside ./LaMed/output/LaMed-finetune-Phi-3-mini-128k-instruct/hf.
Traceback (most recent call last):
  File "/mnt/nvme_share/shiym/projects_3rd/M3D/Bench/eval/eval_caption.py", line 127, in <module>
    main()
  File "/mnt/nvme_share/shiym/projects_3rd/M3D/Bench/eval/eval_caption.py", line 72, in main
    model = AutoModelForCausalLM.from_pretrained(
  File "/home/shiym/anaconda3/envs/m3d/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 524, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/home/shiym/anaconda3/envs/m3d/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 968, in from_pretrained
    config_class = get_class_from_dynamic_module(
  File "/home/shiym/anaconda3/envs/m3d/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 502, in get_class_from_dynamic_module
    final_module = get_cached_module_file(
  File "/home/shiym/anaconda3/envs/m3d/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 306, in get_cached_module_file
    resolved_module_file = cached_file(
  File "/home/shiym/anaconda3/envs/m3d/lib/python3.10/site-packages/transformers/utils/hub.py", line 373, in cached_file
    raise EnvironmentError(
OSError: ./LaMed/output/LaMed-finetune-Phi-3-mini-128k-instruct/hf does not appear to have a file named configuration_phi3.py. Checkout 'https://huggingface.co/./LaMed/output/LaMed-finetune-Phi-3-mini-128k-instruct/hf/tree/None' for available files.

When I merged the model, I ran merge_lora_weights_and_save_hf_model.py like this:

CUDA_VISIBLE_DEVICES=0 python LaMed/src/utils/merge_lora_weights_and_save_hf_model.py \
    --model_name_or_path /mnt/nfs_share/shiym/ckpts/m3d/Phi-3-mini-128k-instruct \
    --model_type phi3 \
    --model_with_lora ./LaMed/output/LaMed-finetune-Phi-3-mini-128k-instruct/model_with_lora.bin \
    --output_dir ./LaMed/output/LaMed-finetune-Phi-3-mini-128k-instruct/hf

Indeed, there is no configuration_phi3.py in ./LaMed/output/LaMed-finetune-Phi-3-mini-128k-instruct/hf/.

baifanxxx commented 4 months ago

Hi,

If you are using a locally trained model instead of my HF model, please remove the following from config.json:

"auto_map": {
    "AutoConfig": "configuration_m3d_lamed.LamedPhi3Config",
    "AutoModelForCausalLM": "modeling_m3d_lamed.LamedPhi3ForCausalLM"
},

After that, I think everything will be ok.
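
If you would rather strip it programmatically, here is a minimal sketch (the config path below is the output directory from your merge command; adjust as needed):

# Sketch: remove the "auto_map" entry from the merged checkpoint's
# config.json, so transformers resolves the config class from the
# installed code instead of looking for configuration_phi3.py in the
# checkpoint directory.
import json

config_path = "./LaMed/output/LaMed-finetune-Phi-3-mini-128k-instruct/hf/config.json"

with open(config_path) as f:
    config = json.load(f)

config.pop("auto_map", None)  # no-op if the key is already absent

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)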

shiym2000 commented 4 months ago

Thank you for your help. It works. However, when I merged LaMed-finetune-Llama-2-7b-chat-hf, I ran into this problem:

Traceback (most recent call last):
  File "/home/shiym/anaconda3/envs/m3d/lib/python3.10/site-packages/transformers/generation/configuration_utils.py", line 733, in save_pretrained
    raise ValueError(str([w.message for w in caught_warnings]))
ValueError: [UserWarning('`do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.'), UserWarning('`do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.')]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/nvme_share/shiym/projects_3rd/M3D/LaMed/src/utils/merge_lora_weights_and_save_hf_model.py", line 178, in <module>
    main()
  File "/mnt/nvme_share/shiym/projects_3rd/M3D/LaMed/src/utils/merge_lora_weights_and_save_hf_model.py", line 171, in main
    model.save_pretrained(training_args.output_dir)
  File "/home/shiym/anaconda3/envs/m3d/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2487, in save_pretrained
    model_to_save.generation_config.save_pretrained(save_directory)
  File "/home/shiym/anaconda3/envs/m3d/lib/python3.10/site-packages/transformers/generation/configuration_utils.py", line 735, in save_pretrained
    raise ValueError(
ValueError: The generation config instance is invalid -- `.validate()` throws warnings and/or exceptions. Fix these issues to save the configuration.

Thrown during validation:
[UserWarning('`do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.'), UserWarning('`do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.')]

I ran the script like this:

CUDA_VISIBLE_DEVICES=5 python LaMed/src/utils/merge_lora_weights_and_save_hf_model.py \
    --model_name_or_path /mnt/nfs_share/shiym/ckpts/m3d/Llama-2-7b-chat-hf \
    --model_type llama2 \
    --model_with_lora ./LaMed/output/LaMed-finetune-Llama-2-7b-chat-hf/model_with_lora.bin \
    --output_dir ./LaMed/output/LaMed-finetune-Llama-2-7b-chat-hf/hf
baifanxxx commented 1 month ago

Sorry for the late reply. If you have solved this problem, please close the issue.
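
For reference, the warning text in the traceback points to the fix itself: either set do_sample=True or unset temperature and top_p on the generation config before saving. A minimal sketch (untested here), patching merge_lora_weights_and_save_hf_model.py just before the model.save_pretrained(...) call:

# Sketch: reset sampling-only fields to their defaults so the
# generation config passes .validate() when do_sample is False.
# Field names are taken from the warnings in the traceback above.
gen_cfg = model.generation_config
if not gen_cfg.do_sample:
    gen_cfg.temperature = 1.0  # default value; silences the temperature warning
    gen_cfg.top_p = 1.0        # default value; silences the top_p warning

model.save_pretrained(training_args.output_dir)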