Closed lucky2046 closed 7 months ago
Can you share your modified requirements.txt?
I did not modify requirements.txt; I modified run_vision_chat.sh. Here it is for your reference:
#!/bin/bash
export SCRIPT_DIR="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
export PROJECT_DIR="$( cd -- "$( dirname -- "$SCRIPT_DIR" )" &> /dev/null && pwd )"
cd "$PROJECT_DIR"
export PYTHONPATH="$PYTHONPATH:$PROJECT_DIR"
# MODEL_NAME='LWM-Chat-1M-Jax'
# MODEL_NAME='LWM-Chat-128K-Jax'
MODEL_NAME='LWM-Chat-32K-Jax'
export llama_tokenizer_path="/mnt/data/test/LWM/models/${MODEL_NAME}/tokenizer.model"
export vqgan_checkpoint="/mnt/data/test/LWM/models/${MODEL_NAME}/vqgan"
export lwm_checkpoint="/mnt/data/test/LWM/models/${MODEL_NAME}/params"
export input_file="/mnt/data/test/2020-07-30_pose_test_006.mp4"
python3 -u -m lwm.vision_chat \
--prompt="What is the video about?" \
--input_file="$input_file" \
--vqgan_checkpoint="$vqgan_checkpoint" \
--dtype='fp32' \
--load_llama_config='7b' \
--max_n_frames=8 \
--update_llama_config="dict(sample_mode='text',theta=50000000,max_sequence_length=131072,use_flash_attention=False,scan_attention=False,scan_query_chunk_size=128,scan_key_chunk_size=128,remat_attention='',scan_mlp=False,scan_mlp_chunk_size=2048,remat_mlp='',remat_block='',scan_layers=True)" \
--load_checkpoint="params::$lwm_checkpoint" \
--tokenizer.vocab_file="$llama_tokenizer_path" \
2>&1 | tee ~/output.log
read
I don't think your GPU has enough memory: the weights of a 7B model in fp32 alone are about 28 GB.
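The 28 GB figure is just parameter count times bytes per parameter. A back-of-envelope check (assuming 7 billion parameters, 4 bytes for fp32 and 2 bytes for fp16/bf16, decimal GB, weights only, no activations or KV cache):

```shell
#!/bin/bash
# Rough weight-memory estimate for a 7B-parameter model.
PARAMS=7000000000
echo "fp32: $(( PARAMS * 4 / 1000000000 )) GB"   # 28 GB - far beyond an 8 GB card
echo "fp16: $(( PARAMS * 2 / 1000000000 )) GB"   # 14 GB - still does not fit in 8 GB
```

So even switching `--dtype` from `'fp32'` to a 16-bit type would only halve the weight footprint to roughly 14 GB, which still exceeds 8 GB of VRAM before any activations are counted.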
bash scripts/run_vision_chat.sh
I removed the --mesh_dim param. The model is LWM-Chat-32K-Jax, and I get an out-of-memory error. How can I solve it? My card is an NVIDIA 2080 Super with 8 GB.