BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: logged curl fragment incorrect #1655

Open danielvanmil opened 8 months ago

danielvanmil commented 8 months ago

What happened?

When I use the proxy with debugging enabled, the requests are nicely logged, like:

POST Request Sent from LiteLLM:
curl -X POST \
https://api-inference.huggingface.co/models/codellama/CodeLlama-34b-Instruct-hf \
-H 'content-type: application/json' -H 'Authorization: Bearer hf_MqMbDnyPAhQLkz********************' \
-d '{'inputs': '[INST] <<SYS>>\n\n  You are a helpful assistent, that only communicates using JSON files.\n  The expected output from you has to be: \n      { \n          "function_call":  {\n            "name": {function_name},\n            "args": [],\n            "ai_notes": {explanation}\n        }\n      }\nProduce JSON OUTPUT ONLY! The following functions are available to you:\n{\'name\': \'get_current_weather\', \'parameters\': {\'type\': \'object\', \'properties\': {\'location\': {\'type\': \'string\', \'description\': \'The city and state, e.g. San Francisco, CA\'}, \'unit\': {\'type\': \'string\', \'enum\': [\'celsius\', \'fahrenheit\']}}, \'required\': [\'location\']}, \'description\': \'Get the current weather in a given location\'}\n\n<</SYS>>\n [/INST]\n\nWhat\'s the weather like in San Francisco, Tokyo, and Paris?\n', 'parameters': {'stream': False, 'stop': ['<</SYS>>'], 'max_new_tokens': 200, 'format': 'json', 'details': True, 'return_full_text': False}, 'stream': False}'

But it looks like the fragment is not correct: the body appears to be the Python dict's repr (single-quoted keys like 'inputs') wrapped in an unescaped single-quoted -d '...' argument, so the logged command is neither valid JSON nor safe to paste into a shell.
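For reference, a minimal sketch of how a copy-pasteable command could be built (this is not LiteLLM's actual logging code; `format_curl` is a hypothetical helper): serialize the body with `json.dumps` and shell-escape it with `shlex.quote`, rather than interpolating the Python dict directly.

```python
import json
import shlex

def format_curl(url: str, headers: dict, data: dict) -> str:
    # shlex.quote wraps each "Key: value" header safely for the shell.
    header_flags = " ".join(
        f"-H {shlex.quote(f'{key}: {value}')}" for key, value in headers.items()
    )
    # json.dumps emits double-quoted, valid JSON; shlex.quote then wraps
    # the whole body in single quotes and escapes any embedded quotes.
    body = shlex.quote(json.dumps(data))
    return f"curl -X POST \\\n  {url} \\\n  {header_flags} \\\n  -d {body}"

cmd = format_curl(
    "https://api-inference.huggingface.co/models/codellama/CodeLlama-34b-Instruct-hf",
    {"content-type": "application/json", "Authorization": "Bearer hf_****"},
    {"inputs": "[INST] ...", "parameters": {"stream": False}},
)
print(cmd)  # safe to paste into a shell as-is
```

With this approach the -d payload in the log would be real JSON in a properly quoted shell string, instead of the broken nested single quotes shown above.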

How to test:

Relevant log output

No response

Twitter / LinkedIn details

No response

ishaan-jaff commented 8 months ago

Hi @danielvanmil, followed up over LinkedIn to better understand this issue.