mlc-ai / mlc-llm

Universal LLM Deployment Engine with ML Compilation
https://llm.mlc.ai/
Apache License 2.0

Unable to serve Mistral-7B-Instruct-v0.3 #2447

Closed. swamysrivathsan closed this issue 3 months ago.

swamysrivathsan commented 3 months ago

🐛 Bug

I'm facing an issue serving the Mistral-7B-Instruct-v0.3 model via mlc_llm serve. I get the error below when serving the model library built with the "python -m mlc_llm compile ..." command:

INFO engine_base.py:191: If you don't have concurrent requests and only use the engine interactively, please select mode "interactive".
thread '<unnamed>' panicked at 'called Result::unwrap() on an Err value: Error("data did not match any variant of untagged enum PreTokenizerWrapper", line: 6952, column: 3)', src/lib.rs:21:50
stack backtrace:
   0: rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::result::unwrap_failed
   3: tokenizers_new_from_str
   4: _ZN10tokenizers9Tokenizer12FromBlobJSONERKNSt7cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
             at /workspace/mlc-llm/3rdparty/tokenizers-cpp/src/huggingface_tokenizer.cc:78:63
   5: _ZN3mlc3llm9Tokenizer8FromPathERKN3tvm7runtime6StringESt8optionalINS013TokenizerInfoEE
             at /workspace/mlc-llm/cpp/tokenizers.cc:117:57
   6: operator()
             at /workspace/mlc-llm/cpp/tokenizers.cc:400:34
   7: run<tvm::runtime::TVMMovableArgValueWithContext>
             at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/packed_func.h:1826:11
   8: run<>
             at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/packed_func.h:1811:60
   9: unpack_call<mlc::llm::Tokenizer, 1, mlc::llm::<lambda(const tvm::runtime::String&)> >
             at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/packed_func.h:1851:46
  10: operator()
             at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/packed_func.h:1911:44
  11: Call
             at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/packed_func.h:1252:58
  12: TVMFuncCall
  13: _ZL39pyx_f_3tvm_4_ffi_4_cy3_4core_FuncCallPvP7_objectP8TVMValuePi
  14: _ZL76pyx_pw_3tvm_4_ffi_4_cy3_4core_10ObjectBase_3init_handle_by_constructor__P7_objectPKS0lS0
  15: PyObject_Vectorcall
  16: _PyEval_EvalFrameDefault
  17: _PyFunction_Vectorcall
  18: <unknown>
  19: _PyObject_MakeTpCall
  20: _PyEval_EvalFrameDefault
  21: _PyFunction_Vectorcall
  22: <unknown>
  23: _PyObject_MakeTpCall
  24: _PyEval_EvalFrameDefault
  25: _PyFunction_Vectorcall
  26: PyObject_Call
  27: _PyEval_EvalFrameDefault
  28: <unknown>
  29: PyEval_EvalCode
  30: <unknown>
  31: <unknown>
  32: PyObject_Vectorcall
  33: _PyEval_EvalFrameDefault
  34: _PyFunction_Vectorcall
  35: <unknown>
  36: Py_RunMain
  37: Py_BytesMain
  38: <unknown>
  39: __libc_start_main
  40: _start
note: Some details are omitted, run with RUST_BACKTRACE=full for a verbose backtrace.
fatal runtime error: failed to initiate panic, error 5
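As an aside, the panic happens while the bundled tokenizers-cpp parses the model's tokenizer.json ("data did not match any variant of untagged enum PreTokenizerWrapper" is a deserialization error from the tokenizers crate). A quick, hypothetical sanity check is to load the same tokenizer.json with the Python tokenizers package; the path below is an assumption about where the converted model lives:

```python
# Hypothetical sanity check: try to parse the converted model's tokenizer.json with
# the Python `tokenizers` package. The path is an assumption -- point it at the
# output directory produced by convert_weight / gen_config.
from tokenizers import Tokenizer

tok = Tokenizer.from_file("dist/Mistral-7B-Instruct-v0.3-q4f16_1-MLC/tokenizer.json")
print(tok.encode("Hello from Mistral-7B-Instruct-v0.3").ids[:10])
```

This only tells you whether the installed Python tokenizers release understands the file; the version vendored by mlc-llm's tokenizers-cpp may be older and still reject it.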

Expected behavior

I was expecting model serving to succeed, as I didn't face any issues in the convert_weights, gen_config, and compile stages.

Environment

Additional context

Below are the versions of the mlc packages currently installed:

mlc-ai-nightly-cu122   0.15.dev380
mlc-llm-nightly-cu122  0.1.dev1313

MasterJH5574 commented 3 months ago

Hi @swamysrivathsan, we fixed the issue in https://github.com/mlc-ai/mlc-llm/pull/2490. The latest nightly build will run tonight, so could you upgrade the mlc pip package tomorrow and try again?

swamysrivathsan commented 3 months ago

> Hi @swamysrivathsan, we fixed the issue in #2490. The latest nightly build will run tonight, so could you upgrade the mlc pip package tomorrow and try again?

Hello @MasterJH5574 - Thanks for the update. I have upgraded the mlc pip packages, and below are the versions currently installed:

mlc-ai-nightly-cu122   0.15.dev404
mlc-llm-nightly-cu122  0.1.dev1347
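(For completeness, a quick way to double-check the installed nightly versions from Python, using only standard-library tooling and the package names listed above:)

```python
# Print the installed versions of the mlc nightly wheels named above.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("mlc-ai-nightly-cu122", "mlc-llm-nightly-cu122"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```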

I now see an issue with response generation: requests fail with a token limit error.

I tried converting the model weights myself and also tried using the prebuilt weights from "mlc-ai/Mistral-7B-Instruct-v0.3-q4f16_1-MLC", but I still hit the same issue.

openai.BadRequestError: Error code: 400 - {"object":"error","message":"Request prompt has 2354 tokens in total, larger than the model input length limit -80.","code":400}

Below is the complete traceback:

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/test/env/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 412, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/home/test/env/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/home/test/env/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/test/env/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/test/env/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/home/test/env/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/home/test/env/lib/python3.10/site-packages/starlette/middleware/cors.py", line 91, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/home/test/env/lib/python3.10/site-packages/starlette/middleware/cors.py", line 146, in simple_response
    await self.app(scope, receive, send)
  File "/home/test/env/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/test/env/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/test/env/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/test/env/lib/python3.10/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/test/env/lib/python3.10/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/home/test/env/lib/python3.10/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/home/test/env/lib/python3.10/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/home/test/env/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/test/env/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/test/env/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
    await response(scope, receive, send)
  File "/home/test/env/lib/python3.10/site-packages/starlette/responses.py", line 257, in __call__
    async with anyio.create_task_group() as task_group:
  File "/home/test/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
    raise exceptions[0]
  File "/home/test/env/lib/python3.10/site-packages/starlette/responses.py", line 260, in wrap
    await func()
  File "/home/test/env/lib/python3.10/site-packages/starlette/responses.py", line 249, in stream_response
    async for chunk in self.body_iterator:
  File "/home/test/baxi-backend-docqa/services/openai_services.py", line 74, in get_response
    stream = await self.client.chat.completions.create(
  File "/home/test/env/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1199, in create
    return await self._post(
  File "/home/test/env/lib/python3.10/site-packages/openai/_base_client.py", line 1474, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/home/test/env/lib/python3.10/site-packages/openai/_base_client.py", line 1275, in request
    return await self._request(
  File "/home/test/env/lib/python3.10/site-packages/openai/_base_client.py", line 1318, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {"object":"error","message":"Request prompt has 2354 tokens in total, larger than the model input length limit -80.","code":400}
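In case it helps to reproduce, the failing call is essentially the standard OpenAI async streaming client pointed at the local mlc_llm serve endpoint. A minimal sketch follows; the base URL, model id, and prompt are illustrative assumptions, not my exact application code:

```python
# Minimal sketch of the client side of the failing request, assuming mlc_llm serve
# is listening on its default OpenAI-compatible endpoint. The base_url, model id,
# and prompt are placeholders.
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://127.0.0.1:8000/v1", api_key="none")


async def main() -> None:
    stream = await client.chat.completions.create(
        model="Mistral-7B-Instruct-v0.3-q4f16_1-MLC",  # whichever model id was passed to `mlc_llm serve`
        messages=[{"role": "user", "content": "A long prompt of roughly 2300 tokens ..."}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="", flush=True)


asyncio.run(main())
```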

MasterJH5574 commented 3 months ago

Thank you so much for catching this bug. I did a bit of triage and fixed it in #2500. Could you upgrade the pip wheel once more tomorrow and try again? Sorry for the inconvenience, and thanks again.

swamysrivathsan commented 3 months ago

@MasterJH5574 - Thanks for your support. The latest fix resolved my issue!

prabhatkgupta commented 2 months ago

I was also facing the same issue, but upgrading to the latest mlc_llm packages solved it.

Thanks @MasterJH5574 for adding support for the Mistral-Instruct-v0.3 models in such a short period of time.