bentoml / OpenLLM

Run any open-source LLM, such as Llama or Gemma, as an OpenAI-compatible API endpoint in the cloud.
https://bentoml.com
Apache License 2.0

bug: load a model from local #86

Closed · skywolf123 closed this 1 year ago

skywolf123 commented 1 year ago

Describe the bug

When I load my local model:

openllm start chatglm --model-id /chatglm-6b

I get an error:

openllm.exceptions.OpenLLMException: Model type <class 'transformers_modules.chatglm-6b.configuration_chatglm.ChatGLMConfig'> is not supported yet.

How can I fix this?

To reproduce

No response

Logs

No response

Environment

cli

System information (Optional)

No response

aarnphm commented 1 year ago

see #87 for fixes

aarnphm commented 1 year ago

Will release a patch soon

skywolf123 commented 1 year ago

I updated to v0.1.19, but I get the same error:

openllm.exceptions.OpenLLMException: Model type <class 'transformers_modules.chatglm-6b-int4.configuration_chatglm.ChatGLMConfig'> is not supported yet.

aarnphm commented 1 year ago

Hey there, we discussed more extensive custom-path support internally and want to share the decision: with a custom model path, it is best that when you do openllm start opt --model-id /path/to/custom-path, OpenLLM first copies the model into the local BentoML store and then serves it from there. This decouples a lot of the loading logic within OpenLLM between custom paths and pretrained models. Under the hood, openllm start does two things when it detects a custom path:

  • openllm import opt /path/to/custom-path -> this will add the custom path to the BentoML store
  • openllm start runs the server (note that this is already the case with pretrained models)

openllm build will behave the same.

To ensure hermeticity, openllm import accepts an optional --model-version so that we don't copy the same path multiple times. If it is not passed, we generate the name from the path (its base name) and the version from a hash of the path's last-modified time.
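A minimal sketch of that workflow as described above (the path and version string here are illustrative, not prescribed by OpenLLM):

# copy the local weights into the BentoML store once, pinning an explicit version
openllm import opt /path/to/custom-path --model-version 2023-07-01
# subsequent starts resolve the same path from the store instead of re-copying it
openllm start opt --model-id /path/to/custom-path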

foxxxx001 commented 1 year ago

"openllm import"? I can't see this option.

aarnphm commented 1 year ago

WIP on https://github.com/bentoml/OpenLLM/pull/102

aarnphm commented 1 year ago

Please try out 0.1.20
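For anyone following along, a standard pip upgrade should pick up that release (assuming OpenLLM was installed via pip):

pip install --upgrade "openllm>=0.1.20"
pip show openllm   # confirm the installed version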

skywolf123 commented 1 year ago

Please try out 0.1.20

Traceback (most recent call last):
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml\_internal\tag.py", line 109, in from_str
    return cls(name, version)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml\_internal\tag.py", line 63, in __init__
    validate_tag_str(lversion)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml\_internal\tag.py", line 40, in validate_tag_str
    raise ValueError(
ValueError: \chatglm2-6b-int4 is not a valid BentoML tag: a tag's name or version must consist of alphanumeric characters, '_', '-', or '.', and must start and end with an alphanumeric character

aarnphm commented 1 year ago

can you send the full traceback here?

skywolf123 commented 1 year ago

can you send the full traceback here?

openllm import chatglm D:\chatglm-6b-int4

Converting 'D' to lowercase: 'd'.

Traceback (most recent call last):
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml\_internal\tag.py", line 109, in from_str
    return cls(name, version)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml\_internal\tag.py", line 63, in __init__
    validate_tag_str(lversion)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml\_internal\tag.py", line 40, in validate_tag_str
    raise ValueError(
ValueError: \chatglm-6b-int4 is not a valid BentoML tag: a tag's name or version must consist of alphanumeric characters, '_', '-', or '.', and must start and end with an alphanumeric character

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "D:\Anaconda3\envs\llm_env\Scripts\openllm.exe\__main__.py", line 7, in <module>
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\cli.py", line 385, in wrapper
    return func(*args, **attrs)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\cli.py", line 358, in wrapper
    return_value = func(*args, **attrs)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\cli.py", line 333, in wrapper
    return f(*args, **attrs)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\cli.py", line 1180, in download_models
    ).for_model(
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\models\auto\factory.py", line 129, in for_model
    llm = model_class.from_pretrained(model_id, model_version=model_version, llm_config=llm_config, **attrs)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\_llm.py", line 648, in from_pretrained
    _tag = bentoml.Tag.from_taglike(model_id)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml\_internal\tag.py", line 96, in from_taglike
    return cls.from_str(taglike)
  File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml\_internal\tag.py", line 111, in from_str
    raise BentoMLException(f"Invalid {cls.__name__} {tag_str}")
bentoml.exceptions.BentoMLException: Invalid Tag D:\chatglm-6b-int4
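The failure mode visible in this traceback: bentoml.Tag.from_taglike treats the model-id string as a name:version tag, so the Windows drive letter D: is split off as the tag name (hence "Converting 'D' to lowercase") and \chatglm-6b-int4 is validated as the version, which the tag rules reject. A rough shell sketch of the rule quoted in the error message (the regex is an assumption distilled from that message, not BentoML's actual code):

# sketch: check a string against the tag rule from the ValueError above
valid='^[a-zA-Z0-9][a-zA-Z0-9._-]*[a-zA-Z0-9]$'
printf '%s\n' 'chatglm-6b-int4'  | grep -Eq "$valid" && echo valid || echo invalid   # -> valid
printf '%s\n' '\chatglm-6b-int4' | grep -Eq "$valid" && echo valid || echo invalid   # -> invalid: leading backslash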

lixiaoxiangzhi commented 1 year ago

I also encountered this issue.

Mercatoro commented 11 months ago

Hey there, we discussed more extensive custom-path support internally and want to share the decision: with a custom model path, it is best that when you do openllm start opt --model-id /path/to/custom-path, OpenLLM first copies the model into the local BentoML store and then serves it from there. This decouples a lot of the loading logic within OpenLLM between custom paths and pretrained models. Under the hood, openllm start does two things when it detects a custom path:

  • openllm import opt /path/to/custom-path -> this will add the custom path to the BentoML store
  • openllm start runs the server (note that this is already the case with pretrained models)

openllm build will behave the same.

To ensure hermeticity, openllm import accepts an optional --model-version so that we don't copy the same path multiple times. If it is not passed, we generate the name from the path (its base name) and the version from a hash of the path's last-modified time.

Hey, loading the model from a local folder should also be possible from a Docker container, correct? I have the following as the last command in my Dockerfile:

CMD ["openllm", "start", "bigcode/starcoder", "--model-id", "/path/to/local/starcoder/model"]

Of course, the path is mounted when running the container.

lixiaoxiangzhi commented 11 months ago

Your message has been received. Wishing you a good mood every day.

Mercatoro commented 11 months ago

Hello again @aarnphm, the way you described using a local model instead of downloading it every time does not seem to work at the moment. Here are my Dockerfile and the run command. It starts up fine without any errors, but it still downloads the entire model from the internet anyway. Do you need a new ticket or further information?

Dockerfile used to build image:

FROM python:3.10-slim
WORKDIR /code
ENV BENTOML_HOME="/root/srv/user/starcoder/"
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN pip install --upgrade pip
RUN --mount=type=secret,id=huggingfacetoken \
    huggingface-cli login --token $(cat /run/secrets/huggingfacetoken)
EXPOSE 3000
COPY . .
CMD ["openllm", "start" , "bigcode/starcoder", "--workers-per-resource", "0.5", "--model-id", "/root/srv/user/starcoder/starcoder"]

requirements.txt

huggingface_hub[cli]
bentoml
psutil
wheel
vllm==0.2.2
torch
transformers
openllm

docker run command to start from the image:

nvidia-docker run --mount type=bind,source=/srv/user/starcoder/starcoder,target=/srv/user/starcoder/models/vllm-bigcode--starcoder/<hash> --gpus all -d -p 3005:3000 <starcoder_image>

aarnphm commented 11 months ago

You only need to run CMD ["openllm", "start", "/mount/path", ...]
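Applied to the Dockerfile above, that suggestion could look roughly like the following; the in-container path /models/starcoder is illustrative, and the positional-path form of openllm start follows the comment above:

# Dockerfile: point openllm start directly at the path where the weights are mounted
CMD ["openllm", "start", "/models/starcoder", "--workers-per-resource", "0.5"]

# docker run: bind-mount the local weights to that same in-container path
nvidia-docker run --gpus all -d -p 3005:3000 \
  --mount type=bind,source=/srv/user/starcoder/starcoder,target=/models/starcoder \
  <starcoder_image>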