NandhaKishorM opened 1 month ago

How do I run inference on a single image after merging, given that both the tokenizer and preprocessor_config.json are missing? A code example, please.

Yes, I removed the code for it. You can just load it from the Hugging Face Hub:
import torch
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

# Load the merged weights from the local checkpoint
model = MllamaForConditionalGeneration.from_pretrained(
    "path/to/checkpoint",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Load the processor (tokenizer + image preprocessor) from the Hub instead
processor = AutoProcessor.from_pretrained(model_id)
You can just do it like this. If you need to use something like vLLM, download processor_config.json and the other missing files separately.
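For the single-image part of the question, a minimal sketch continuing from the snippet above (the image path and prompt text here are placeholders):

from PIL import Image

# Placeholder image path; substitute your own file
image = Image.open("example.jpg")

# Build a chat-style prompt containing an image slot
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Preprocess the image and tokenize the prompt together, then generate
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))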
preprocessor_config.json and the tokenizer model files are missing after merging.
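One workaround (a sketch, assuming the merged weights live at "path/to/checkpoint") is to fetch the processor from the Hub once and save it next to the merged weights, so the checkpoint directory is self-contained for tools like vLLM:

from transformers import AutoProcessor

# Download the tokenizer and image-preprocessor configs from the Hub
processor = AutoProcessor.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct")

# Write preprocessor_config.json, tokenizer files, etc. into the checkpoint directory
processor.save_pretrained("path/to/checkpoint")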