Alpha-VLLM / LLaMA2-Accessory

An Open-source Toolkit for LLM Development
https://llama2-accessory.readthedocs.io/

How to run `single_turn.py` without distributed mode? #80

Closed: EricWiener closed this issue 10 months ago

EricWiener commented 11 months ago

I tried running with:

export OUTPUT_DIR=./output/
python demos/single_turn.py \
--llama_config $OUTPUT_DIR/config/13B_params.json --tokenizer_path $OUTPUT_DIR/config/tokenizer.model \
--pretrained_path $OUTPUT_DIR/finetune/mm/alpacaLlava_llamaQformerv2_13b/consolidated.00-of-02.model.pth

and I get the error:

Not using distributed mode
Traceback (most recent call last):
  File "/home/ddlabs/repositories/LLaMA2-Accessory/accessory/demos/single_turn.py", line 52, in <module>
    fs_init.initialize_model_parallel(args.model_parallel_size)
  File "/home/ddlabs/miniconda3/envs/accessory/lib/python3.10/site-packages/fairscale/nn/model_parallel/initialize.py", line 68, in initialize_model_parallel
    assert torch.distributed.is_initialized()
AssertionError

Commenting out `fs_init.initialize_model_parallel(args.model_parallel_size)` only results in a failure later on (a possible single-process workaround is sketched at the end of this comment).

Is there a working demo for a multi-modal model on a single computer?
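
For reference, a minimal sketch of such a single-process workaround: create a one-rank process group before the model-parallel init so that `torch.distributed.is_initialized()` passes. This is not an official entry point of the repo; it assumes a CUDA machine (use `backend="gloo"` on CPU), and note that a checkpoint saved in two shards, like the 13B one above, still expects two model-parallel ranks (see the answer below).

```python
# Hypothetical single-process setup, sketched from the traceback above;
# not an official LLaMA2-Accessory entry point.
import os

import torch.distributed as dist
import fairscale.nn.model_parallel.initialize as fs_init

# Treat this machine as a one-node "cluster"; any free port works.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# A world of exactly one rank satisfies torch.distributed.is_initialized().
# Use backend="gloo" on a CPU-only machine.
dist.init_process_group(backend="nccl", rank=0, world_size=1)

# Model-parallel size 1: all parameters live in this single process.
fs_init.initialize_model_parallel(1)
```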

Enderfga commented 11 months ago

Thank you for reaching out and sharing the details of the issue. First, note that the 13B model has two weight files: besides the `consolidated.00-of-02.model.pth` you mentioned, you also need `consolidated.01-of-02.model.pth`. `--pretrained_path` should point to the directory containing both `.pth` files.
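
With the `$OUTPUT_DIR` layout from the command above, that means `--pretrained_path` would be `$OUTPUT_DIR/finetune/mm/alpacaLlava_llamaQformerv2_13b`, and that directory should contain both shards:

```
consolidated.00-of-02.model.pth
consolidated.01-of-02.model.pth
```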

Our demos are designed to run on a single computer, but under the default settings the 7B model requires 1 GPU, the 13B model requires 2 GPUs, and so on. For a smooth setup, I recommend running `bash demos/start.sh`, which walks you through each configuration step (a direct launch equivalent is sketched below). That should let you run the multi-modal model on your machine.
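
For reference, a direct launch without the helper script might look like the following. This is a sketch, assuming the demo is started via `torchrun` with one process per GPU for the two 13B shards; the exact flags may differ between versions, and if the model-parallel size is not inferred automatically you may also need to pass `--model_parallel_size 2` (the argument exists per the traceback above):

```
torchrun --nproc_per_node=2 demos/single_turn.py \
--llama_config $OUTPUT_DIR/config/13B_params.json \
--tokenizer_path $OUTPUT_DIR/config/tokenizer.model \
--pretrained_path $OUTPUT_DIR/finetune/mm/alpacaLlava_llamaQformerv2_13b
```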

Should you require further assistance or have any additional questions, feel free to contact us. We're here to help!