THUDM / CogVLM2

GPT4V-level open-source multi-modal model based on Llama3-8B
Apache License 2.0

error: ‘for’ loop initial declarations are only allowed in C99 mode #86

Closed · rossipang closed this 2 weeks ago

rossipang commented 3 weeks ago

System Info / 系統信息

CentOS 7, 4× RTX 3090, CUDA 12.4.1

Who can help? / 谁可以帮助到您?

No response

Information / 问题信息

Reproduction / 复现过程

Running openai_api_request.py raises an error.

Expected behavior / 期待表现

```
2024-06-06 15:43:43.031 | DEBUG | __main__:generate_stream_cogvlm:329 - ==== request ==== What’s in this image?
/tmp/tmp1qwuuf1v/main.c: In function ‘list_to_cuuint64_array’:
/tmp/tmp1qwuuf1v/main.c:354:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
   for (Py_ssize_t i = 0; i < len; i++) {
   ^
/tmp/tmp1qwuuf1v/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code
/tmp/tmp1qwuuf1v/main.c: In function ‘list_to_cuuint32_array’:
/tmp/tmp1qwuuf1v/main.c:365:3: error: ‘for’ loop initial declarations are only allowed in C99 mode
   for (Py_ssize_t i = 0; i < len; i++) {
   ^
INFO: 113.209.130.164:3215 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
    response = await func(request)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/web/project/CogVLM2/basic_demo/openai_api_demo_multi_gpus.py", line 188, in create_chat_completion
    response = generate_cogvlm(model, tokenizer, gen_params)
  File "/web/project/CogVLM2/basic_demo/openai_api_demo_multi_gpus.py", line 256, in generate_cogvlm
    for response in generate_stream_cogvlm(model, tokenizer, params):
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/web/project/CogVLM2/basic_demo/openai_api_demo_multi_gpus.py", line 376, in generate_stream_cogvlm
    model.generate(**inputs, **gen_kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/transformers/generation/utils.py", line 1758, in generate
    result = self._sample(
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/transformers/generation/utils.py", line 2397, in _sample
    outputs = self(
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/modeling_cogvlm.py", line 620, in forward
    outputs = self.model(
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/modeling_cogvlm.py", line 402, in forward
    return self.llm_forward(
  File "/root/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/modeling_cogvlm.py", line 486, in llm_forward
    layer_outputs = decoder_layer(
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/modeling_cogvlm.py", line 261, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/modeling_cogvlm.py", line 204, in forward
    query_states, key_states = self.rotary_emb(query_states, key_states, position_ids=position_ids, max_seqlen=position_ids.max() + 1)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/util.py", line 469, in forward
    q = apply_rotary_emb_func(
  File "/root/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/util.py", line 329, in apply_rotary_emb
    return ApplyRotaryEmb.apply(
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/torch/autograd/function.py", line 598, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/root/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/util.py", line 255, in forward
    out = apply_rotary(
  File "/root/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/util.py", line 212, in apply_rotary
    rotary_kernel[grid](
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/triton/runtime/jit.py", line 167, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/triton/runtime/jit.py", line 363, in run
    device = driver.get_current_device()
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/triton/runtime/driver.py", line 209, in __getattr__
    self._initialize_obj()
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/triton/runtime/driver.py", line 206, in _initialize_obj
    self._obj = self._init_fn()
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/triton/runtime/driver.py", line 239, in initialize_driver
    return CudaDriver()
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/triton/runtime/driver.py", line 102, in __init__
    self.utils = CudaUtils()
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/triton/runtime/driver.py", line 49, in __init__
    so = _build("cuda_utils", src_path, tmpdir)
  File "/web/miniforge3/envs/cog/lib/python3.11/site-packages/triton/common/build.py", line 106, in _build
    ret = subprocess.check_call(cc_cmd)
  File "/web/miniforge3/envs/cog/lib/python3.11/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp1qwuuf1v/main.c', '-O3', '-I/web/miniforge3/envs/cog/lib/python3.11/site-packages/triton/common/../third_party/cuda/include', '-I/web/miniforge3/envs/cog/include/python3.11', '-I/tmp/tmp1qwuuf1v', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmp1qwuuf1v/cuda_utils.cpython-311-x86_64-linux-gnu.so', '-L/lib64', '-L/lib', '-L/lib64', '-L/lib']' returned non-zero exit status 1.
```
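
For context, the gcc diagnostic at the top of the log is the root cause: Triton JIT-compiles a small C helper (`cuda_utils`) at runtime, and the generated `main.c` declares the loop variable inside the `for` statement, which CentOS 7's default gcc 4.8.5 (C89/gnu90 mode by default) rejects. A minimal standalone check, outside Triton, that reproduces the same compiler behaviour (the snippet and file name below are illustrative, not Triton's actual generated code):

```python
import os
import subprocess
import tempfile
import textwrap

# A tiny C file with a C99-style declaration inside 'for', like the code Triton generates.
src = textwrap.dedent("""\
    #include <stddef.h>
    int main(void) {
        for (size_t i = 0; i < 4; i++) { }  /* needs -std=c99/-std=gnu99 on gcc 4.8 */
        return 0;
    }
""")

with tempfile.TemporaryDirectory() as tmpdir:
    c_path = os.path.join(tmpdir, "c99_check.c")
    out_path = os.path.join(tmpdir, "c99_check")
    with open(c_path, "w") as f:
        f.write(src)
    # On CentOS 7's stock gcc 4.8.5 this fails with the same
    # "'for' loop initial declarations are only allowed in C99 mode" error;
    # gcc >= 5 (default -std=gnu11) compiles it without extra flags.
    result = subprocess.run(["gcc", c_path, "-o", out_path],
                            capture_output=True, text=True)
    print("exit code:", result.returncode)
    print(result.stderr)
```

If this check fails on your machine, the compiler note in the log ("use option -std=c99 or -std=gnu99") is pointing at the same thing: the system gcc is too old for the code Triton emits.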

zRzRzRzRzRzRzR commented 3 weeks ago

This is probably a problem with how your dependencies were installed; from the error it looks like a CUDA-related issue.
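
A related note: the command that fails in the log is `/usr/bin/gcc`, i.e. the system gcc 4.8.5 on CentOS 7, so the problem is the compiler rather than the CUDA install itself. One possible workaround, assuming a newer gcc is available (the devtoolset-9 path below is only an example, not something the repo ships) and that your Triton version honours the `CC` environment variable when it builds its `cuda_utils` helper, is to point Triton at that compiler before the first request triggers a kernel build, e.g. near the top of `openai_api_demo_multi_gpus.py`:

```python
import os

# Hypothetical path to a newer gcc (e.g. from devtoolset-9 / Software Collections on CentOS 7);
# this path is an assumption -- adjust to wherever a gcc >= 5 lives on your machine.
NEW_GCC = "/opt/rh/devtoolset-9/root/usr/bin/gcc"

if os.path.exists(NEW_GCC):
    # Triton's runtime build step reads CC when it compiles its cuda_utils helper,
    # so the C99-style generated code gets built by a compiler that accepts it.
    os.environ["CC"] = NEW_GCC
```

Alternatively, launching the server from a shell with a newer toolchain enabled (for example `scl enable devtoolset-9 bash` on CentOS 7), or running on a distro whose default gcc is ≥ 5, avoids the problem. This is a sketch of one approach, not an official fix.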

xiehengye commented 1 week ago

How was this problem solved?

xiehengye commented 1 week ago

Is it because the installation of bitsandbytes-0.43.1 failed? On my side the install errors out with: ERROR: bitsandbytes-0.43.1-py3-none-manylinux_2_24_x86_64.whl is not a supported wheel on this platform.
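
For what it's worth, that bitsandbytes failure looks like a separate platform issue rather than the cause of the gcc error above: the `manylinux_2_24` wheel tag requires glibc ≥ 2.24 (and a pip new enough to understand such tags), while CentOS 7 ships glibc 2.17, so pip rejects the wheel. A quick diagnostic sketch to confirm what your environment reports:

```python
import platform
import sys

# CentOS 7 ships glibc 2.17, below the 2.24 implied by the manylinux_2_24 wheel tag,
# which is why pip reports the wheel as "not supported on this platform".
print("python:", sys.version.split()[0])
print("libc:", platform.libc_ver())      # e.g. ('glibc', '2.17') on CentOS 7
print("platform:", platform.platform())
```

Running `pip debug --verbose` also lists the wheel tags pip considers compatible, which makes the mismatch explicit.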