EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval
https://lmms-lab.github.io/

How to debug in vscode #22

Closed xmu-xiaoma666 closed 3 months ago

xmu-xiaoma666 commented 3 months ago
```shell
accelerate launch --num_processes=8 -m lmms_eval --model llava --model_args pretrained="liuhaotian/llava-v1.5-7b" --tasks mme,mmbench_en --batch_size 1 --log_samples --log_samples_suffix llava_v1.5_mme_mmbenchen --output_path ./logs/
```

Since `lmms_eval` is a package (a folder) launched with `-m` rather than a script file, how can I debug it in VS Code? What should the `launch.json` look like?

Luodian commented 3 months ago

It will look like the following:

```json
{
    "name": "LMMs-Eval-ICONQA",
    "type": "python",
    "request": "launch",
    "module": "accelerate.commands.launch",
    "python": "/home/tiger/miniconda3/envs/lmms-eval/bin/python",
    "env": {
        "CUDA_VISIBLE_DEVICES": "0,1,2,3,4,5,6,7",
        "PYTHONWARNINGS": "ignore",
        "TOKENIZERS_PARALLELISM": "false",
        "ACCELERATE_DEBUG_MODE": "1"
    },
    "cwd": "${workspaceFolder}",
    "justMyCode": false,
    "args": [
        "--main_process_port=12566",
        "--num_processes=1",
        "lmms_eval",
        "--model=llava",
        "--model_args=pretrained=/home/tiger/checkpoints/llava-v1.5-7b",
        "--tasks=iconqa_val",
        "--batch_size=1",
        "--limit=8",
        "--log_samples",
        "--log_samples_suffix=debug",
        "--output_path=./logs/",
        "--verbosity=DEBUG"
    ],
    "console": "integratedTerminal"
}
```
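For quick breakpoint debugging, you can also skip `accelerate` entirely and launch the package as a module in a single process. This is a sketch, not from the thread: it assumes `python -m lmms_eval` accepts the same CLI flags (which the `accelerate launch ... -m lmms_eval` invocation above suggests) and that the model fits on one GPU.

```json
{
    "name": "LMMs-Eval-Direct",
    "type": "python",
    "request": "launch",
    "module": "lmms_eval",
    "cwd": "${workspaceFolder}",
    "justMyCode": false,
    "env": {
        "CUDA_VISIBLE_DEVICES": "0"
    },
    "args": [
        "--model=llava",
        "--model_args=pretrained=liuhaotian/llava-v1.5-7b",
        "--tasks=mme",
        "--batch_size=1",
        "--limit=8",
        "--log_samples",
        "--log_samples_suffix=debug",
        "--output_path=./logs/",
        "--verbosity=DEBUG"
    ],
    "console": "integratedTerminal"
}
```

`--limit=8` keeps the run short while stepping through breakpoints; select the configuration in the Run and Debug panel and press F5 to start.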