Open Hojun-Son opened 2 weeks ago
Hi @Hojun-Son, I just ran the same command and was able to start a server, so this may be a latent networking issue while downloading the model.
Also, please make sure to specify the model ID and other parameters at startup.
I'd recommend downloading a model first via text-generation-server download-weights HuggingFaceM4/idefics2-8b
and then running it via text-generation-launcher --model-id HuggingFaceM4/idefics2-8b
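If the download stalls, one quick sanity check is to confirm the weights actually landed on disk before launching. A minimal sketch, assuming the default Hugging Face Hub cache layout (repos stored under the cache directory as models--&lt;org&gt;--&lt;name&gt;, overridable via the HUGGINGFACE_HUB_CACHE environment variable):

```python
import os

# Default Hub cache location; honor the override env var if set.
cache_dir = os.environ.get(
    "HUGGINGFACE_HUB_CACHE",
    os.path.expanduser("~/.cache/huggingface/hub"),
)

model_id = "HuggingFaceM4/idefics2-8b"
# The Hub cache stores each repo as models--<org>--<name>.
repo_dir = os.path.join(cache_dir, "models--" + model_id.replace("/", "--"))

if os.path.isdir(repo_dir):
    print(f"weights cached at {repo_dir}")
else:
    print("weights not found; the download may have stalled")
```

If the directory is missing or empty after download-weights finishes, the hang is likely on the network side rather than in the launcher itself.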
I hope these commands work for you.
I tried to run text-generation-inference locally, but the process hangs. What is usually the cause of this problem? For your information, all arguments are left at their defaults.