BerriAI / litellm

Python SDK, Proxy Server to call 100+ LLM APIs using the OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: 400, bad request when submitting files #4377

Closed — flefevre closed this issue 2 months ago

flefevre commented 2 months ago

What happened?

When I use open-webui to upload a file and ask a model managed by LiteLLM to summarize it, I get: External: 400, message='Bad Request', url=URL('http://litellm:8000/v1/chat/completions'). If I use open-webui without uploading a file, just for basic inference with a model managed by LiteLLM, it works. If I use the model served by vLLM directly, without LiteLLM, it also works.
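For context, when a file is attached open-webui prepends the document to the request as a `system` message, which is the request shape that fails below. A minimal reproduction sketch against the proxy, assuming the base URL, proxy key, and model alias taken from the log trace (the context text is a placeholder):

```python
# Hedged reproduction sketch: a chat request with a 'system' message sent
# through the LiteLLM proxy, mirroring what open-webui produces when a file
# is uploaded. Endpoint, key, and model alias are taken from the log below.
from openai import OpenAI

client = OpenAI(
    base_url="http://litellm:8000/v1",  # LiteLLM proxy endpoint from the log
    api_key="sk-1234",                  # proxy key visible in the log headers
)

client.chat.completions.create(
    model="gemma-2b",  # alias routed to openai/google/gemma-1.1-2b-it
    messages=[
        # open-webui injects the uploaded document as a system message
        {"role": "system", "content": "Use the following context ... <context>placeholder</context>"},
        {"role": "user", "content": "de quoi parle ce document"},
    ],
    stream=True,
)
# Raises openai.BadRequestError: Error code: 400 -
# {'message': 'System role not supported', ...} when the backend is Gemma on vLLM.
```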

Relevant log output

`litellm  | 10:38:24 - LiteLLM Router:INFO: router.py:659 - litellm.acompletion(model=openai/google/gemma-1.1-2b-it) Exception OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}`
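The 400 is raised by the vLLM backend: the Gemma chat template does not accept a `system` role, so any request that carries one (here, the RAG context injected by open-webui) is rejected before LiteLLM can stream a response. One possible client-side workaround, sketched below as an assumption rather than a LiteLLM feature, is to fold the system prompt into the first user message before sending:

```python
# Sketch of a client-side workaround (not a LiteLLM feature): merge leading
# 'system' content into the first 'user' message so backends whose chat
# template rejects the system role (e.g. Gemma on vLLM) accept the request.
def merge_system_into_user(messages):
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if system_parts and rest and rest[0]["role"] == "user":
        rest[0] = {
            "role": "user",
            "content": "\n\n".join(system_parts) + "\n\n" + rest[0]["content"],
        }
    return rest

# The failing request from the log, rewritten without a system role:
messages = [
    {"role": "system", "content": "Use the following context ... <context>placeholder</context>"},
    {"role": "user", "content": "de quoi parle ce document"},
]
print(merge_system_into_user(messages))
```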

Twitter / LinkedIn details

No response

flefevre commented 2 months ago

Here is the full log trace:

itellm | Request to litellm: litellm | 10:38:24 - LiteLLM:INFO: utils.py:1298 - litellm | litellm | POST Request Sent from LiteLLM: litellm | curl -X POST \ litellm | http://vllm-gemma-2b:5011/v1 \ litellm | -d '{'model': 'google/gemma-1.1-2b-it', 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAAAAAAA n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}], 'temperature': 0.1, 'stream': True, 'extra_body': {}}' litellm | litellm | litellm | litellm.acompletion(api_key='yourapikey', api_base='http://vllm-gemma-2b:5011/v1', model='openai/google/gemma-1.1-2b-it', temperature=0.1, stream=True, messages=[{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAA n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}], caching=False, client=<openai.AsyncOpenAI object at 0x73443102b490>, timeout=6000, proxy_server_request={'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'body': {'model': 'gemma-2b', 'stream': True, 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}]}}, metadata={'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None, 'previous_models': 
[{'exception_type': 'APIError', 'exception_string': "OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}", 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '7440'}, 'body': {'model': 'gemma-2b', 'stream': True, 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: résume le document suivant"}, {'role': 'user', 'content': 'résume le document suivant'}]}}, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '7440'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None}, 'request_timeout': 600, 'litellm_call_id': 'b1d72ded-8070-438a-aaac-2ed2f37d3992', 'litellm_logging_obj': <litellm.utils.Logging object at 0x734430f65110>, 'model': 'gemma-2b', 'stream': True}, {'exception_type': 'APIError', 'exception_string': "OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}", 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '7440'}, 'body': {'model': 'gemma-2b', 'stream': True, 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: résume le document suivant"}, {'role': 'user', 'content': 'résume le document suivant'}]}}, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 
'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '7440'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None}, 'request_timeout': 600, 'litellm_call_id': 'b1d72ded-8070-438a-aaac-2ed2f37d3992', 'litellm_logging_obj': <litellm.utils.Logging object at 0x734430f65110>, 'model': 'gemma-2b', 'stream': True}, {'exception_type': 'APIError', 'exception_string': "OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}", 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'body': {'model': 'gemma-2b', 'stream': True, 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}]}}, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None}, 'request_timeout': 600, 'litellm_call_id': 'b6be49f7-1271-4e07-a7c0-67887af39282', 'litellm_logging_obj': <litellm.utils.Logging object at 0x734431425750>, 'model': 'gemma-2b', 'stream': True}, {'exception_type': 'APIError', 'exception_string': "OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}", 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'body': {'model': 'gemma-2b', 
'stream': True, 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}]}}, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None}, 'request_timeout': 600, 'litellm_call_id': 'b6be49f7-1271-4e07-a7c0-67887af39282', 'litellm_logging_obj': <litellm.utils.Logging object at 0x734431425750>, 'model': 'gemma-2b', 'stream': True}]}, request_timeout=600, litellm_call_id='b6be49f7-1271-4e07-a7c0-67887af39282', litellm_logging_obj=<litellm.utils.Logging object at 0x734431425750>, model_info={'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, max_retries=0) litellm | litellm | litellm | ASYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache'): None litellm | Final returned optional params: {'temperature': 0.1, 'stream': True, 'max_retries': 0, 'extra_body': {}} litellm | self.optional_params: {'temperature': 0.1, 'stream': True, 'max_retries': 0, 'extra_body': {}} litellm | RAW RESPONSE: litellm | <coroutine object OpenAIChatCompletion.async_streaming at 0x7344302d9460> litellm | litellm | litellm | Logging Details: logger_fn - None | callable(logger_fn) - False litellm | Logging Details LiteLLM-Failure Call: ['langfuse', <bound method Router.deployment_callback_on_failure of <litellm.router.Router object at 0x7344313b5790>>, <litellm.proxy.hooks.parallel_request_limiter._PROXY_MaxParallelRequestsHandler object at 0x73443136f690>, <litellm.proxy.hooks.tpm_rpm_limiter._PROXY_MaxTPMRPMLimiter object at 0x73443136f6d0>, <litellm.proxy.hooks.max_budget_limiter._PROXY_MaxBudgetLimiter object at 0x7344367ed290>, <litellm.proxy.hooks.cache_control_check._PROXY_CacheControlCheck object at 0x734433511b10>, <litellm._service_logger.ServiceLogging object at 0x734435d96790>] litellm | 10:38:24 - LiteLLM:INFO: langfuse.py:182 - Langfuse Layer Logging - logging success litellm | 10:38:24 - LiteLLM Router:INFO: router.py:659 - litellm.acompletion(model=openai/google/gemma-1.1-2b-it) Exception OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400} litellm | Traceback (most recent call last): litellm | File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 723, in 
async_streaming litellm | response = await openai_aclient.chat.completions.create( litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1181, in create litellm | return await self._post( litellm | ^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1790, in post litellm | return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1493, in request litellm | return await self._request( litellm | ^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1584, in _request litellm | raise self._make_status_error_from_response(err.response) from None litellm | openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400} litellm | litellm | During handling of the above exception, another exception occurred: litellm | litellm | Traceback (most recent call last): litellm | File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 340, in acompletion litellm | response = await init_response litellm | ^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 746, in async_streaming litellm | raise OpenAIError(status_code=e.status_code, message=str(e)) litellm | litellm.llms.openai.OpenAIError: Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400} litellm | litellm | During handling of the above exception, another exception occurred: litellm | litellm | Traceback (most recent call last): litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1525, in async_function_with_fallbacks litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1511, in async_function_with_fallbacks litellm | response = await self.async_function_with_retries(*args, **kwargs) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1701, in async_function_with_retries litellm | raise original_exception litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1626, in async_function_with_retries litellm | response = await original_function(*args, **kwargs) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 664, in _acompletion litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 648, in _acompletion litellm | response = await _response litellm | ^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 3898, in wrapper_async litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 3726, in wrapper_async litellm | result = await original_function(*args, **kwargs) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 361, in acompletion litellm | raise exception_type( litellm | ^^^^^^^^^^^^^^^ litellm | File 
"/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 9882, in exception_type litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 8633, in exception_type litellm | raise APIError( litellm | litellm.exceptions.APIError: OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400} litellm | Traceback (most recent call last): litellm | File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 723, in async_streaming litellm | response = await openai_aclient.chat.completions.create( litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1181, in create litellm | return await self._post( litellm | ^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1790, in post litellm | return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1493, in request litellm | return await self._request( litellm | ^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1584, in _request litellm | raise self._make_status_error_from_response(err.response) from None litellm | openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400} litellm | litellm | During handling of the above exception, another exception occurred: litellm | litellm | Traceback (most recent call last): litellm | File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 340, in acompletion litellm | response = await init_response litellm | ^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 746, in async_streaming litellm | raise OpenAIError(status_code=e.status_code, message=str(e)) litellm | litellm.llms.openai.OpenAIError: Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400} litellm | litellm | During handling of the above exception, another exception occurred: litellm | litellm | Traceback (most recent call last): litellm | File "/usr/local/lib/python3.11/site-packages/litellm/proxy/proxy_server.py", line 3850, in chat_completion litellm | responses = await llm_responses litellm | ^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 545, in acompletion litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 541, in acompletion litellm | response = await self.async_function_with_fallbacks(**kwargs) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1607, in async_function_with_fallbacks litellm | raise original_exception litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1525, in async_function_with_fallbacks litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1511, in async_function_with_fallbacks litellm | response = await self.async_function_with_retries(*args, **kwargs) litellm | 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1701, in async_function_with_retries litellm | raise original_exception litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1626, in async_function_with_retries litellm | response = await original_function(*args, **kwargs) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 664, in _acompletion litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 648, in _acompletion litellm | response = await _response litellm | ^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 3898, in wrapper_async litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 3726, in wrapper_async litellm | result = await original_function(*args, **kwargs) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 361, in acompletion litellm | raise exception_type( litellm | ^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 9882, in exception_type litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 8633, in exception_type litellm | raise APIError( litellm | litellm.exceptions.APIError: OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400} litellm | Traceback (most recent call last): litellm | File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 723, in async_streaming litellm | response = await openai_aclient.chat.completions.create( litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1181, in create litellm | return await self._post( litellm | ^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1790, in post litellm | return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1493, in request litellm | return await self._request( litellm | ^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1584, in _request litellm | raise self._make_status_error_from_response(err.response) from None litellm | openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400} litellm | litellm | During handling of the above exception, another exception occurred: litellm | litellm | Traceback (most recent call last): litellm | File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 340, in acompletion litellm | response = await init_response litellm | ^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 746, in async_streaming litellm | raise OpenAIError(status_code=e.status_code, message=str(e)) litellm | litellm.llms.openai.OpenAIError: Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 
'param': None, 'code': 400} litellm | litellm | During handling of the above exception, another exception occurred: litellm | litellm | Traceback (most recent call last): litellm | File "/usr/local/lib/python3.11/site-packages/litellm/proxy/proxy_server.py", line 3850, in chat_completion litellm | responses = await llm_responses litellm | ^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 545, in acompletion litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 541, in acompletion litellm | response = await self.async_function_with_fallbacks(**kwargs) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1607, in async_function_with_fallbacks litellm | raise original_exception litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1525, in async_function_with_fallbacks litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1511, in async_function_with_fallbacks litellm | response = await self.async_function_with_retries(*args, **kwargs) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1701, in async_function_with_retries litellm | raise original_exception litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 1626, in async_function_with_retries litellm | response = await original_function(*args, **kwargs) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 664, in _acompletion litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/router.py", line 648, in _acompletion litellm | response = await _response litellm | ^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 3898, in wrapper_async litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 3726, in wrapper_async litellm | result = await original_function(*args, **kwargs) litellm | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 361, in acompletion litellm | raise exception_type( litellm | ^^^^^^^^^^^^^^^ litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 9882, in exception_type litellm | raise e litellm | File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 8633, in exception_type litellm | raise APIError( litellm | litellm.exceptions.APIError: OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400} litellm | Giving up chat_completion(...) after 1 tries (litellm.proxy.proxy_server.ProxyException) litellm | Giving up chat_completion(...) 
after 1 tries (litellm.proxy.proxy_server.ProxyException) litellm | Langfuse Logging - Enters logging function for model {'model': 'google/gemma-1.1-2b-it', 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}], 'optional_params': {'temperature': 0.1, 'stream': True, 'max_retries': 0, 'extra_body': {}}, 'litellm_params': {'acompletion': True, 'api_key': 'yourapikey', 'force_timeout': 600, 'logger_fn': None, 'verbose': False, 'custom_llm_provider': 'openai', 'api_base': 'http://vllm-gemma-2b:5011/v1', 'litellm_call_id': 'b6be49f7-1271-4e07-a7c0-67887af39282', 'model_alias_map': {}, 'completion_call_id': None, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None, 'previous_models': [{'exception_type': 'APIError', 'exception_string': "OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}", 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '7440'}, 'body': {'model': 'gemma-2b', 'stream': True, 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\nAAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: résume le document suivant"}, {'role': 'user', 'content': 'résume le document suivant'}]}}, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': 
'*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '7440'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None}, 'request_timeout': 600, 'litellm_call_id': 'b1d72ded-8070-438a-aaac-2ed2f37d3992', 'litellm_logging_obj': <litellm.utils.Logging object at 0x734430f65110>, 'model': 'gemma-2b', 'stream': True}, {'exception_type': 'APIError', 'exception_string': "OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}", 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '7440'}, 'body': {'model': 'gemma-2b', 'stream': True, 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\nAAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: résume le document suivant"}, {'role': 'user', 'content': 'résume le document suivant'}]}}, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '7440'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None}, 'request_timeout': 600, 'litellm_call_id': 'b1d72ded-8070-438a-aaac-2ed2f37d3992', 'litellm_logging_obj': <litellm.utils.Logging object at 0x734430f65110>, 'model': 'gemma-2b', 'stream': True}, {'exception_type': 'APIError', 'exception_string': "OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}", 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'body': {'model': 'gemma-2b', 'stream': True, 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\nAAAAAA\n</context>\n\nWhen answer to 
user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}]}}, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None}, 'request_timeout': 600, 'litellm_call_id': 'b6be49f7-1271-4e07-a7c0-67887af39282', 'litellm_logging_obj': <litellm.utils.Logging object at 0x734431425750>, 'model': 'gemma-2b', 'stream': True}, {'exception_type': 'APIError', 'exception_string': "OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}", 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'body': {'model': 'gemma-2b', 'stream': True, 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}]}}, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None}, 'request_timeout': 600, 'litellm_call_id': 'b6be49f7-1271-4e07-a7c0-67887af39282', 'litellm_logging_obj': <litellm.utils.Logging object at 
0x734431425750>, 'model': 'gemma-2b', 'stream': True}]}, 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '8284'}, 'body': {'model': 'gemma-2b', 'stream': True, 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}]}}, 'preset_cache_key': None, 'no-log': False, 'stream_response': {}, 'input_cost_per_token': None, 'input_cost_per_second': None, 'output_cost_per_token': None, 'output_cost_per_second': None}, 'start_time': datetime.datetime(2024, 6, 24, 10, 38, 24, 918338), 'stream': True, 'user': None, 'call_type': 'acompletion', 'litellm_call_id': 'b6be49f7-1271-4e07-a7c0-67887af39282', 'completion_start_time': None, 'temperature': 0.1, 'max_retries': 0, 'extra_body': {}, 'input': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\nAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}], 'api_key': 'yourapikey', 'original_response': <coroutine object OpenAIChatCompletion.async_streaming at 0x7344302d9460>, 'additional_args': {'headers': None, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'acompletion': True, 'complete_input_dict': {'model': 'google/gemma-1.1-2b-it', 'messages': [{'role': 'system', 'content': "\nUse the following context as your learned knowledge, inside <context></context> XML tags.\n<context>\n AAAAAAA\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\nGiven the context information, answer the query.\nQuery: de quoi parle ce document"}, {'role': 'user', 'content': 'de quoi parle ce document'}], 'temperature': 0.1, 'stream': True, 'extra_body': {}}}, 'log_event_type': 'failed_api_call', 'exception': APIError("OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}"), 'traceback_exception': 'Traceback (most recent call last):\n File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 723, in async_streaming\n response = await openai_aclient.chat.completions.create(\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1181, in create\n return await self._post(\n ^^^^^^^^^^^^^^^^^\n File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1790, in post\n return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1493, in request\n return await self._request(\n ^^^^^^^^^^^^^^^^^^^^\n File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1584, in _request\n raise self._make_status_error_from_response(err.response) from None\nopenai.BadRequestError: Error code: 400 - {\'object\': \'error\', \'message\': \'System role not supported\', \'type\': \'BadRequestError\', \'param\': None, \'code\': 400}\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 340, in acompletion\n response = await init_response\n ^^^^^^^^^^^^^^^^^^^\n File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 746, in async_streaming\n raise OpenAIError(status_code=e.status_code, message=str(e))\nlitellm.llms.openai.OpenAIError: Error code: 400 - {\'object\': \'error\', \'message\': \'System role not supported\', \'type\': \'BadRequestError\', \'param\': None, \'code\': 400}\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 3726, in wrapper_async\n result = await original_function(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 361, in acompletion\n raise exception_type(\n ^^^^^^^^^^^^^^^\n File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 9882, in exception_type\n raise e\n File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 8633, in exception_type\n raise APIError(\nlitellm.exceptions.APIError: OpenAIException - Error code: 400 - {\'object\': \'error\', \'message\': \'System role not supported\', \'type\': \'BadRequestError\', \'param\': None, \'code\': 400}\n', 'end_time': datetime.datetime(2024, 6, 24, 10, 38, 24, 955395)} litellm | OUTPUT IN LANGFUSE: OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}; original: None litellm | Langfuse Layer Logging - final response object: None litellm | Inside Max Parallel Request Failure Hook litellm | user_api_key: 88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b litellm | get cache: cache key: 88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b::2024-06-24-10-38::request_count; local_only: False litellm | get cache: cache result: {'current_requests': 0, 'current_tpm': 0, 'current_rpm': 0} litellm | updated_value in failure call: {'current_requests': 0, 'current_tpm': 0, 'current_rpm': 0} litellm | set cache: key: 88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b::2024-06-24-10-38::request_count; value: {'current_requests': 0, 'current_tpm': 0, 'current_rpm': 0} litellm | InMemoryCache: set_cache litellm | initial list of deployments: [{'model_name': 'gemma-2b', 'litellm_params': {'api_key': 'yourapikey', 'api_base': 
'http://vllm-gemma-2b:5011/v1', 'model': 'openai/google/gemma-1.1-2b-it', 'temperature': 0.1, 'stream': True}, 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}}] litellm | async get cache: cache key: 08-38:cooldown_models; local_only: False litellm | in_memory_result: None litellm | get cache: cache result: None litellm | INFO: 172.22.0.15:58024 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request litellm | Initialized litellm callbacks, Async Success Callbacks: [<litellm.proxy.hooks.parallel_request_limiter._PROXY_MaxParallelRequestsHandler object at 0x73443136f690>, <litellm.proxy.hooks.tpm_rpm_limiter._PROXY_MaxTPMRPMLimiter object at 0x73443136f6d0>, <litellm.proxy.hooks.max_budget_limiter._PROXY_MaxBudgetLimiter object at 0x7344367ed290>, <litellm.proxy.hooks.cache_control_check._PROXY_CacheControlCheck object at 0x734433511b10>, <litellm._service_logger.ServiceLogging object at 0x734435d96790>, <bound method SlackAlerting.response_taking_too_long_callback of <litellm.integrations.slack_alerting.SlackAlerting object at 0x73443136f710>>] litellm | self.optional_params: {} litellm | LiteLLM Proxy: Inside Proxy Logging Pre-call hook! litellm | Inside Max Parallel Request Pre-Call Hook litellm | get cache: cache key: 88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b::2024-06-24-10-38::request_count; local_only: False litellm | get cache: cache result: {'current_requests': 0, 'current_tpm': 0, 'current_rpm': 0} litellm | current: {'current_requests': 0, 'current_tpm': 0, 'current_rpm': 0} litellm | Inside Max TPM/RPM Limiter Pre-Call Hook - token='88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b' key_name=None key_alias=None spend=0.0 max_budget=None expires=None models=[] aliases={} config={} user_id=None team_id=None max_parallel_requests=None metadata={} tpm_limit=None rpm_limit=None budget_duration=None budget_reset_at=None allowed_cache_controls=[] permissions={} model_spend={} model_max_budget={} soft_budget_cooldown=False litellm_budget_table=None org_id=None user_id_rate_limits=None team_id_rate_limits=None team_spend=None team_alias=None team_tpm_limit=None team_rpm_limit=None team_max_budget=None team_models=[] team_blocked=False soft_budget=None team_model_aliases=None api_key='88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b' user_role='proxy_admin' allowed_model_region=None litellm | get cache: cache key: 88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b; local_only: False litellm | get cache: cache result: None litellm | _set_limits: False litellm | Inside Max Budget Limiter Pre-Call Hook litellm | get cache: cache key: None_user_api_key_user_id; local_only: False litellm | get cache: cache result: None litellm | Inside Cache Control Check Pre-Call Hook litellm | LiteLLM Proxy: final data being sent to completion call: {'model': 'gemma-2b', 'messages': [{'role': 'user', 'content': 'Here is the query:\nde quoi parle ce document\n\nCreate a concise, 3-5 word phrase with an emoji as a title for the previous query. Suitable Emojis for the summary can be used to enhance understanding but avoid quotation marks or special formatting. 
RESPOND ONLY WITH THE TITLE TEXT.\n\nExamples of titles:\n📉 Stock Market Trends\n🍪 Perfect Chocolate Chip Recipe\nEvolution of Music Streaming\nRemote Work Productivity Tips\nArtificial Intelligence in Healthcare\n🎮 Video Game Development Insights'}], 'stream': False, 'max_tokens': 50, 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '633'}, 'body': {'model': 'gemma-2b', 'messages': [{'role': 'user', 'content': 'Here is the query:\nde quoi parle ce document\n\nCreate a concise, 3-5 word phrase with an emoji as a title for the previous query. Suitable Emojis for the summary can be used to enhance understanding but avoid quotation marks or special formatting. RESPOND ONLY WITH THE TITLE TEXT.\n\nExamples of titles:\n📉 Stock Market Trends\n🍪 Perfect Chocolate Chip Recipe\nEvolution of Music Streaming\nRemote Work Productivity Tips\nArtificial Intelligence in Healthcare\n🎮 Video Game Development Insights'}], 'stream': False, 'max_tokens': 50}}, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '633'}, 'endpoint': 'http://litellm:8000/v1/chat/completions'}, 'request_timeout': 600, 'litellm_call_id': '5aa7f76f-8f57-4c35-b059-a77bc5d43938', 'litellm_logging_obj': <litellm.utils.Logging object at 0x7344287dd650>} litellm | initial list of deployments: [{'model_name': 'gemma-2b', 'litellm_params': {'api_key': 'yourapikey', 'api_base': 'http://vllm-gemma-2b:5011/v1', 'model': 'openai/google/gemma-1.1-2b-it', 'temperature': 0.1, 'stream': True}, 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}}] litellm | async get cache: cache key: 08-38:cooldown_models; local_only: False litellm | in_memory_result: None litellm | get cache: cache result: None litellm | get cache: cache key: c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881; local_only: True litellm | get cache: cache result: 3 litellm | set cache: key: c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881; value: 4 litellm | InMemoryCache: set_cache litellm | get cache: cache key: c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881_async_client; local_only: True litellm | get cache: cache result: <openai.AsyncOpenAI object at 0x734431024f90> litellm | get cache: cache key: c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881_max_parallel_requests_client; local_only: True litellm | get cache: cache result: None litellm | litellm | litellm | Request to litellm: litellm | 10:38:25 - LiteLLM:INFO: utils.py:1298 - litellm | litellm | POST Request Sent from LiteLLM: litellm | curl -X POST \ litellm | http://vllm-gemma-2b:5011/v1/ \ litellm | -H 'Authorization: Bearer yourapikey' \ litellm | -d '{'model': 'google/gemma-1.1-2b-it', 'messages': [{'role': 'user', 'content': 'Here is the query:\nde quoi parle ce document\n\nCreate a concise, 3-5 word 
phrase with an emoji as a title for the previous query. Suitable Emojis for the summary can be used to enhance understanding but avoid quotation marks or special formatting. RESPOND ONLY WITH THE TITLE TEXT.\n\nExamples of titles:\n📉 Stock Market Trends\n🍪 Perfect Chocolate Chip Recipe\nEvolution of Music Streaming\nRemote Work Productivity Tips\nArtificial Intelligence in Healthcare\n🎮 Video Game Development Insights'}], 'temperature': 0.1, 'stream': False, 'max_tokens': 50, 'extra_body': {}}' litellm | litellm | litellm | litellm.acompletion(api_key='yourapikey', api_base='http://vllm-gemma-2b:5011/v1', model='openai/google/gemma-1.1-2b-it', temperature=0.1, stream=False, messages=[{'role': 'user', 'content': 'Here is the query:\nde quoi parle ce document\n\nCreate a concise, 3-5 word phrase with an emoji as a title for the previous query. Suitable Emojis for the summary can be used to enhance understanding but avoid quotation marks or special formatting. RESPOND ONLY WITH THE TITLE TEXT.\n\nExamples of titles:\n📉 Stock Market Trends\n🍪 Perfect Chocolate Chip Recipe\nEvolution of Music Streaming\nRemote Work Productivity Tips\nArtificial Intelligence in Healthcare\n🎮 Video Game Development Insights'}], caching=False, client=<openai.AsyncOpenAI object at 0x734431024f90>, timeout=6000, max_tokens=50, proxy_server_request={'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '633'}, 'body': {'model': 'gemma-2b', 'messages': [{'role': 'user', 'content': 'Here is the query:\nde quoi parle ce document\n\nCreate a concise, 3-5 word phrase with an emoji as a title for the previous query. Suitable Emojis for the summary can be used to enhance understanding but avoid quotation marks or special formatting. 
RESPOND ONLY WITH THE TITLE TEXT.\n\nExamples of titles:\n📉 Stock Market Trends\n🍪 Perfect Chocolate Chip Recipe\nEvolution of Music Streaming\nRemote Work Productivity Tips\nArtificial Intelligence in Healthcare\n🎮 Video Game Development Insights'}], 'stream': False, 'max_tokens': 50}}, metadata={'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '633'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None}, request_timeout=600, litellm_call_id='5aa7f76f-8f57-4c35-b059-a77bc5d43938', litellm_logging_obj=<litellm.utils.Logging object at 0x7344287dd650>, model_info={'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, max_retries=0) litellm | litellm | litellm | ASYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache'): None litellm | Final returned optional params: {'temperature': 0.1, 'stream': False, 'max_tokens': 50, 'max_retries': 0, 'extra_body': {}} litellm | self.optional_params: {'temperature': 0.1, 'stream': False, 'max_tokens': 50, 'max_retries': 0, 'extra_body': {}} litellm | RAW RESPONSE: litellm | {"id": "cmpl-19a47d768cfb43b58f3954801ddddfc6", "choices": [{"finish_reason": "stop", "index": 0, "logprobs": null, "message": {"content": "\ud83c\udfae Video Game Development Insights", "role": "assistant", "function_call": null, "tool_calls": []}, "stop_reason": null}], "created": 1719218305, "model": "google/gemma-1.1-2b-it", "object": "chat.completion", "system_fingerprint": null, "usage": {"completion_tokens": 6, "prompt_tokens": 106, "total_tokens": 112}} litellm | litellm | litellm | Async Wrapper: Completed Call, calling async_success_handler: <bound method Logging.async_success_handler of <litellm.utils.Logging object at 0x7344287dd650>> litellm | Logging Details LiteLLM-Success Call: None litellm | Looking up model=google/gemma-1.1-2b-it in model_cost_map litellm | success callbacks: ['langfuse', <litellm.proxy.hooks.parallel_request_limiter._PROXY_MaxParallelRequestsHandler object at 0x73443136f690>, <litellm.proxy.hooks.tpm_rpm_limiter._PROXY_MaxTPMRPMLimiter object at 0x73443136f6d0>, <litellm.proxy.hooks.max_budget_limiter._PROXY_MaxBudgetLimiter object at 0x7344367ed290>, <litellm.proxy.hooks.cache_control_check._PROXY_CacheControlCheck object at 0x734433511b10>, <litellm._service_logger.ServiceLogging object at 0x734435d96790>] litellm | 10:38:25 - LiteLLM:INFO: langfuse.py:182 - Langfuse Layer Logging - logging success litellm | 10:38:25 - LiteLLM Router:INFO: router.py:651 - litellm.acompletion(model=openai/google/gemma-1.1-2b-it) 200 OK litellm | Langfuse Logging - Enters logging function for model {'model': 'google/gemma-1.1-2b-it', 'messages': [{'role': 'user', 'content': 'Here is the query:\nde quoi parle ce document\n\nCreate a concise, 3-5 word phrase with an emoji as a title for the previous query. 
Suitable Emojis for the summary can be used to enhance understanding but avoid quotation marks or special formatting. RESPOND ONLY WITH THE TITLE TEXT.\n\nExamples of titles:\n📉 Stock Market Trends\n🍪 Perfect Chocolate Chip Recipe\nEvolution of Music Streaming\nRemote Work Productivity Tips\nArtificial Intelligence in Healthcare\n🎮 Video Game Development Insights'}], 'optional_params': {'temperature': 0.1, 'stream': False, 'max_tokens': 50, 'max_retries': 0, 'extra_body': {}}, 'litellm_params': {'acompletion': True, 'api_key': 'yourapikey', 'force_timeout': 600, 'logger_fn': None, 'verbose': False, 'custom_llm_provider': 'openai', 'api_base': 'http://vllm-gemma-2b:5011/v1/', 'litellm_call_id': '5aa7f76f-8f57-4c35-b059-a77bc5d43938', 'model_alias_map': {}, 'completion_call_id': None, 'metadata': {'user_api_key': '88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b', 'user_api_key_alias': None, 'global_max_parallel_requests': None, 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_id': None, 'user_api_key_team_alias': None, 'user_api_key_metadata': {}, 'headers': {'host': 'litellm:8000', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '633'}, 'endpoint': 'http://litellm:8000/v1/chat/completions', 'model_group': 'gemma-2b', 'deployment': 'openai/google/gemma-1.1-2b-it', 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'api_base': 'http://vllm-gemma-2b:5011/v1', 'caching_groups': None}, 'model_info': {'id': 'c0f1d4edb983420904691e2ae0e929ac11ff957876554412974d44f2657a3881', 'db_model': False}, 'proxy_server_request': {'url': 'http://litellm:8000/v1/chat/completions', 'method': 'POST', 'headers': {'host': 'litellm:8000', 'authorization': 'Bearer sk-1234', 'content-type': 'application/json', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'user-agent': 'Python/3.11 aiohttp/3.9.5', 'content-length': '633'}, 'body': {'model': 'gemma-2b', 'messages': [{'role': 'user', 'content': 'Here is the query:\nde quoi parle ce document\n\nCreate a concise, 3-5 word phrase with an emoji as a title for the previous query. Suitable Emojis for the summary can be used to enhance understanding but avoid quotation marks or special formatting. RESPOND ONLY WITH THE TITLE TEXT.\n\nExamples of titles:\n📉 Stock Market Trends\n🍪 Perfect Chocolate Chip Recipe\nEvolution of Music Streaming\nRemote Work Productivity Tips\nArtificial Intelligence in Healthcare\n🎮 Video Game Development Insights'}], 'stream': False, 'max_tokens': 50}}, 'preset_cache_key': None, 'no-log': False, 'stream_response': {}, 'input_cost_per_token': None, 'input_cost_per_second': None, 'output_cost_per_token': None, 'output_cost_per_second': None}, 'start_time': datetime.datetime(2024, 6, 24, 10, 38, 25, 3087), 'stream': False, 'user': None, 'call_type': 'acompletion', 'litellm_call_id': '5aa7f76f-8f57-4c35-b059-a77bc5d43938', 'completion_start_time': datetime.datetime(2024, 6, 24, 10, 38, 25, 777373), 'temperature': 0.1, 'max_tokens': 50, 'max_retries': 0, 'extra_body': {}, 'input': [{'role': 'user', 'content': 'Here is the query:\nde quoi parle ce document\n\nCreate a concise, 3-5 word phrase with an emoji as a title for the previous query. Suitable Emojis for the summary can be used to enhance understanding but avoid quotation marks or special formatting. 
RESPOND ONLY WITH THE TITLE TEXT.\n\nExamples of titles:\n📉 Stock Market Trends\n🍪 Perfect Chocolate Chip Recipe\nEvolution of Music Streaming\nRemote Work Productivity Tips\nArtificial Intelligence in Healthcare\n🎮 Video Game Development Insights'}], 'api_key': 'yourapikey', 'additional_args': {'complete_input_dict': {'model': 'google/gemma-1.1-2b-it', 'messages': [{'role': 'user', 'content': 'Here is the query:\nde quoi parle ce document\n\nCreate a concise, 3-5 word phrase with an emoji as a title for the previous query. Suitable Emojis for the summary can be used to enhance understanding but avoid quotation marks or special formatting. RESPOND ONLY WITH THE TITLE TEXT.\n\nExamples of titles:\n📉 Stock Market Trends\n🍪 Perfect Chocolate Chip Recipe\nEvolution of Music Streaming\nRemote Work Productivity Tips\nArtificial Intelligence in Healthcare\n🎮 Video Game Development Insights'}], 'temperature': 0.1, 'stream': False, 'max_tokens': 50, 'extra_body': {}}}, 'log_event_type': 'successful_api_call', 'end_time': datetime.datetime(2024, 6, 24, 10, 38, 25, 777373), 'cache_hit': None, 'response_cost': None} litellm | OUTPUT IN LANGFUSE: {'content': '🎮 Video Game Development Insights', 'role': 'assistant', 'tool_calls': []}; original: ModelResponse(id='cmpl-19a47d768cfb43b58f3954801ddddfc6', choices=[Choices(finish_reason='stop', index=0, message=Message(content='🎮 Video Game Development Insights', role='assistant', tool_calls=[]))], created=1719218305, model='google/gemma-1.1-2b-it', object='chat.completion', system_fingerprint=None, usage=Usage(completion_tokens=6, prompt_tokens=106, total_tokens=112)) litellm | Langfuse Layer Logging - logging to langfuse v2 litellm | trace: None litellm | Langfuse Layer Logging - final response object: ModelResponse(id='cmpl-19a47d768cfb43b58f3954801ddddfc6', choices=[Choices(finish_reason='stop', index=0, message=Message(content='🎮 Video Game Development Insights', role='assistant', tool_calls=[]))], created=1719218305, model='google/gemma-1.1-2b-it', object='chat.completion', system_fingerprint=None, usage=Usage(completion_tokens=6, prompt_tokens=106, total_tokens=112)) litellm | Logging Details LiteLLM-Async Success Call litellm | Looking up model=google/gemma-1.1-2b-it in model_cost_map litellm | INSIDE parallel request limiter ASYNC SUCCESS LOGGING litellm | get cache: cache key: 88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b::2024-06-24-10-38::request_count; local_only: False litellm | get cache: cache result: {'current_requests': 0, 'current_tpm': 0, 'current_rpm': 0} litellm | updated_value in success call: {'current_requests': 0, 'current_tpm': 112, 'current_rpm': 1}, precise_minute: 2024-06-24-10-38 litellm | set cache: key: 88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b::2024-06-24-10-38::request_count; value: {'current_requests': 0, 'current_tpm': 112, 'current_rpm': 1} litellm | InMemoryCache: set_cache litellm | INSIDE TPM RPM Limiter ASYNC SUCCESS LOGGING litellm | get cache: cache key: 88dc28d0f030c55ed4ab77ed8faf098196cb1c05df778539800c9f1243fe6b4b; local_only: False litellm | get cache: cache result: None litellm | INFO: 172.22.0.15:58034 - "POST /v1/chat/completions HTTP/1.1" 200 OK

krrishdholakia commented 2 months ago
`litellm  | 10:38:24 - LiteLLM Router:INFO: router.py:659 - litellm.acompletion(model=openai/google/gemma-1.1-2b-it) Exception OpenAIException - Error code: 400 - {'object': 'error', 'message': 'System role not supported', 'type': 'BadRequestError', 'param': None, 'code': 400}`

@flefevre the error tells you the issue: your custom model doesn't support system messages, and that's what causes the 400.

I'll expose a flag for this that you can set to false for your model.
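
For reference, here is a minimal sketch of the kind of request that triggers the 400. The proxy endpoint, virtual key `sk-1234`, and model alias `gemma-2b` are taken from the logs above; treat it as an illustration, not a verified reproduction:

```python
# Hypothetical reproduction: send a chat request containing a "system" message
# through the LiteLLM proxy to a backend that rejects the system role.
from openai import OpenAI

client = OpenAI(base_url="http://litellm:8000/v1", api_key="sk-1234")

try:
    client.chat.completions.create(
        model="gemma-2b",
        messages=[
            {"role": "system", "content": "Use the following context..."},
            {"role": "user", "content": "de quoi parle ce document"},
        ],
    )
except Exception as err:
    # Expected when the backend rejects the system role:
    # Error code: 400 - {'message': 'System role not supported', ...}
    print(err)
```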

krrishdholakia commented 2 months ago

Just checked - we already support this. Add this to your model config:

model_list:
  - model_name: my-custom-model
    litellm_params:
      model: openai/google/gemma
      api_base: http://my-custom-base
      api_key: ""
      supports_system_message: False # 👈 KEY CHANGE

we'll convert it to a user message - https://github.com/BerriAI/litellm/blob/151d19960e689588208feee240440a5c875dec46/litellm/main.py#L884
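
Roughly, that conversion behaves like the sketch below (illustrative only - the function name is made up here and this is not the actual LiteLLM code linked above):

```python
# Illustrative sketch (not LiteLLM's implementation): when a model is marked
# with supports_system_message: False, any system message is demoted to a
# user message before the request is forwarded to the backend.
def demote_system_messages(messages: list[dict]) -> list[dict]:
    return [
        {"role": "user", "content": m["content"]} if m.get("role") == "system" else m
        for m in messages
    ]

messages = [
    {"role": "system", "content": "Use the following context as your learned knowledge..."},
    {"role": "user", "content": "de quoi parle ce document"},
]
print(demote_system_messages(messages))
# [{'role': 'user', 'content': 'Use the following context as your learned knowledge...'},
#  {'role': 'user', 'content': 'de quoi parle ce document'}]
```

With that in place, the backend only ever sees `user`/`assistant` roles, so the vLLM check that raised "System role not supported" is never hit.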

krrishdholakia commented 2 months ago

Can we do a quick call sometime this week?

Would love to learn how you're using LiteLLM - https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

flefevre commented 2 months ago

Dear krrishdholakia, thanks a lot. Your analysis was right. I made the modification and it works. Thanks to the LiteLLM team.

At the present time I have a huge amount of work, but I have added your Calendly link to my ticket list. I will contact you during the summer.

François