Describe the bug
When running the command to load chatglm or chatglm2:

```
openllm start chatglm --model-id thudm/chatglm2-6b
openllm start chatglm --model-id thudm/chatglm-6b
```

both fail with:

```
Error: No such option: --model-id
```

Running `openllm start chatglm -h` to check the available options shows:

```
Usage: openllm start chatglm [OPTIONS]

  chatglm is currently not available to run on your local machine because it requires GPU for inference.

Options:
  Miscellaneous options:
    -q, --quiet         Suppress all output.
    --debug, --verbose  Print out debug logs.
    --do-not-track      Do not send usage info
    -h, --help          Show this message and exit.
```

The `--model-id` option is missing entirely, so neither chatglm variant can be started with a specific model.
To reproduce
No response
Logs
No response
Environment
- Hardware: Apple M1 Max
- openllm: 0.1.20.dev
- Python: 3.9.7
System information (Optional)
No response