Closed: Xuchen-Li closed this issue 4 months ago.
Thanks for your attention!
The tokenizer path corresponds to the path of the LLM, such as Llama2's path. There is no need to modify frames_ops in config.yaml unless you want to use a different processor, such as CLIP's official processor. If you prefer to use CLIPViT or Siglip's official processor, simply set frames_ops to {path/to/clipvit} or {path/to/siglip}.
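For concreteness, a minimal sketch of how these two fields might look in configs/sample_config.yaml; the paths are placeholders and any other keys in the real config are omitted:

```yaml
# Sketch of the two relevant fields in configs/sample_config.yaml.
# Paths are placeholders; point them at your local checkpoint directories.
tokenizer: /path/to/llama2        # path of the LLM (e.g. Llama2), used to build the text tokenizer
frames_ops: /path/to/clipvit      # or /path/to/siglip; only set this if you want the official CLIP/Siglip processor
```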
Thanks a lot!
Hello, sorry for bothering you again.
I am wondering about the settings for tokenizer and frames_ops in configs/sample_config.yaml, which are used in eval/data/video_llm_data.py (lines 98-103 and 123-128).
How should I load the video_processor and tokenizer from the pretrained model as specified in configs/sample_config.yaml?
Thanks a lot!