2U1 / Llama3.2-Vision-Finetune

An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta.

preprocessor_config.json and tokenizer models are not saved after merging #5

Open · NandhaKishorM opened 1 month ago

NandhaKishorM commented 1 month ago

preprocessor_config.json and the tokenizer model files are missing after merging

2U1 commented 1 month ago

Yes, I removed the code for that. You can just load them from the Hugging Face Hub.

NandhaKishorM commented 1 month ago

How do I run inference on a single image after merging, since both the tokenizer and preprocessor_config.json are missing? Could you share a code example?

2U1 commented 1 month ago
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

model = MllamaForConditionalGeneration.from_pretrained(
    "path/to/checkpoint",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

You could just do it like this. If you need to use vLLM or similar tools, just download preprocessor_config.json and the other processor files and place them alongside the merged checkpoint.
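For a complete single-image run, here is a minimal sketch building on the snippet above; the image path and prompt text are placeholders, and the generation calls follow standard Transformers Mllama usage rather than anything specific to this repo:

import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    "path/to/checkpoint",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# "example.jpg" and the prompt text are placeholders for illustration.
image = Image.open("example.jpg")
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))

If a tool like vLLM expects the processor files to sit next to the merged weights, one option (an assumption, not something this repo's code does) is to save the Hub processor into the checkpoint directory once:

# Writes preprocessor_config.json, tokenizer files, etc.
# into the merged checkpoint folder.
processor.save_pretrained("path/to/checkpoint")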