Closed: lemon-awa closed this issue 4 months ago
SGLang should take care of downloading the model and tokenizer. Did you try launching the server with SGLang using the following command?
python3 -m sglang.launch_server --model-path AIML-TUDA/LlavaGuard-7B --tokenizer-path llava-hf/llava-1.5-7b-hf --port 10000
Otherwise, you should also be able to clone the two repos and provide the local paths of the model and tokenizer when you launch the server.
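Concretely, the local-path route described above could look like the following sketch. The local directory names simply mirror the repo names and are assumptions; adjust them to wherever you clone the repos:

```shell
# Clone both repos locally (git-lfs is needed for the model weights).
git lfs install
git clone https://huggingface.co/AIML-TUDA/LlavaGuard-7B
git clone https://huggingface.co/llava-hf/llava-1.5-7b-hf

# Launch the SGLang server pointing at the local copies.
python3 -m sglang.launch_server \
  --model-path ./LlavaGuard-7B \
  --tokenizer-path ./llava-1.5-7b-hf \
  --port 10000
```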
Is SGLang the only way to download the checkpoints? Does downloading directly from Hugging Face not work?
Besides, after downloading the model and tokenizer checkpoints through SGLang, what code do I need to load them? I get errors with both LlavaLlamaForCausalLM.from_pretrained and AutoModelForCausalLM.from_pretrained.
I also get errors like this when I run the command python3 -m sglang.launch_server --model-path AIML-TUDA/LlavaGuard-7B --tokenizer-path llava-hf/llava-1.5-7b-hf --port 10000
There seems to be a problem with the dependencies. You can try the Dockerfile provided by SGLang; it should work without any further installations.
I have tried to download the LlavaGuard model with:

model_path = "AIML-TUDA/LlavaGuard-7B"
tokenizer_path = "llava-hf/llava-1.5-7b-hf"

but it fails with the error: AIML-TUDA/LlavaGuard-7B does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/AIML-TUDA/LlavaGuard-7B/tree/main' for available files.
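For what it's worth, that error matches the file layout of the two repos: Transformers looks for preprocessor_config.json when loading an image processor, and the LlavaGuard-7B repo does not ship one, while the llava-hf repo does. A small check (assuming network access and the huggingface_hub package):

```python
# List the files each Hub repo actually ships; repo IDs are taken
# from the launch command above. Requires network access.
from huggingface_hub import list_repo_files

model_files = list_repo_files("AIML-TUDA/LlavaGuard-7B")
tokenizer_files = list_repo_files("llava-hf/llava-1.5-7b-hf")

# The model repo lacks preprocessor_config.json, which is why loading
# it directly with an Auto* class fails; the llava-hf repo has it.
print("preprocessor_config.json" in model_files)      # expected: False
print("preprocessor_config.json" in tokenizer_files)  # expected: True
```

This is consistent with the maintainer's suggestion to pass the llava-hf repo as the tokenizer path while the weights come from AIML-TUDA.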