BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Feature]: OpenAI v1.0.0 Compatibility #774

Closed · ishaan-jaff closed this 11 months ago

ishaan-jaff commented 1 year ago

The Feature

Starting this issue to ensure LiteLLM is compatible with OpenAI v1.0.0

The main goal of this issue:

If a user has OpenAI v1.0.0 installed, their OpenAI calls through litellm should not fail

Secondary Goal

Ensure backwards compatibility - the bulk of our users are on the older OpenAI version - we should ensure they are unaffected by this change

ETA of Migration to OpenAI v1.0.0 compatibility

11/09/2023 End of day

UPDATE 13/11/2023: 🎉🎉 LiteLLM v1.0.0 is now in prod!!!

UPDATE 11/11/2023: Dev release out now - pip install --upgrade litellm==1.0.0.dev1 - https://pypi.org/project/litellm/1.0.0.dev1/

UPDATE 10/11/2023: Undocumented changes caught - tracking them here: https://github.com/BerriAI/litellm/discussions/648. Dev release for openai v1 out by tomorrow night (11/11/2023).

ishaan-jaff commented 1 year ago

We're planning on moving OpenAI + Azure calls to the raw HTTP endpoints over the next 24 hrs.

This seems like the best solution for maintaining backwards compatibility.
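
For context, a rough sketch of what calling the raw HTTP endpoint looks like (illustrative only, not LiteLLM's actual implementation; assumes an OPENAI_API_KEY environment variable and the standard chat completions endpoint):

```python
import os
import httpx

def raw_chat_completion(model: str, messages: list[dict]) -> dict:
    """Call OpenAI's /v1/chat/completions endpoint directly over HTTP."""
    resp = httpx.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": model, "messages": messages},
        timeout=600.0,
    )
    resp.raise_for_status()
    return resp.json()
```

Going through the HTTP API directly means the installed openai package version no longer matters for these calls.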

pchalasani commented 1 year ago

Speaking from the POV of langroid: we've been on openai 0.x until now, where we expect all responses to be dicts. I'm now upgrading to openai 1.x, where these responses become Pydantic objects. For the first phase of the upgrade I will simply convert the returned responses to dicts using response.model_dump(), as mentioned in the migration guide.

Question: Will litellm responses continue to be dicts, or will you switch to Pydantic objects? Either is fine as long as I can easily convert the Pydantic objects to dicts via model_dump().
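
For illustration, a minimal sketch of the normalization described above (hypothetical helper; assumes the response is either a plain dict or a Pydantic model exposing model_dump()):

```python
from typing import Any

def to_dict(response: Any) -> dict:
    """Normalize an LLM response to a plain dict.

    Handles openai 0.x-style dict responses as well as openai/litellm
    1.x-style Pydantic response objects.
    """
    if isinstance(response, dict):
        return response
    if hasattr(response, "model_dump"):  # Pydantic v2 objects
        return response.model_dump()
    if hasattr(response, "dict"):  # Pydantic v1 fallback
        return response.dict()
    raise TypeError(f"Unsupported response type: {type(response)!r}")
```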

ishaan-jaff commented 1 year ago

Another concern:

another issue is that litellm dependency on openai v0 causes a version conflict when I move langroid to openai v1, so for now I will make litellm an extra install

fcakyon commented 1 year ago

Is there any ETA? We are having issues in our autollm package due to the litellm dependency

ishaan-jaff commented 1 year ago

We're going to have the full migration done by Thursday this week @fcakyon

ishaan-jaff commented 1 year ago

@fcakyon what's the issue you're seeing? Do you have the latest openai version on your system, which is leading to version conflicts with litellm?

fcakyon commented 1 year ago

We had to use the latest llama-index for the latest openai api update. Now the llama-index and litellm package dependencies clash and autollm cannot be installed :(

fcakyon commented 1 year ago

This is the error message:

ERROR: Cannot install autollm because these package versions have conflicting dependencies.

The conflict is caused by:
    litellm 0.12.11 depends on openai>=0.27.0 and <0.29.0
    llama-index 0.8.64 depends on openai>=1.1.0

To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

krrishdholakia commented 1 year ago

Working on this today @fcakyon.

I believe a temporary patch is to use an older version of llama-index to ensure there are no dependency conflict issues.

llama-index 0.8.62 should resolve this.
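
In other words, something like the following pin should install cleanly until the migration lands (per the suggestion above; exact versions may need adjusting):

```
pip install "llama-index==0.8.62" litellm
```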

krrishdholakia commented 1 year ago

Tracking the changes that need to be made:

Completions

3 'providers' impacted:

Scenarios Impacted:

Embeddings

2 'providers' impacted

Moderations

1 'provider' impacted

pchalasani commented 1 year ago

Another concern:

another issue is that litellm dependency on openai v0 causes a version conflict when I move langroid to openai v1, so for now I will make litellm an extra install

To elaborate a bit on this -- I use poetry for dependency management, so even if I set litellm up as an "extra" in pyproject.toml, poetry install fails because the dependency resolver bails out: litellm requires openai v0 whereas langroid requires v1. So for now I had to comment litellm out of pyproject.toml entirely, and note in the release notes that it cannot be used with langroid.
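
For reference, a minimal sketch of the setup being described (hypothetical pyproject.toml fragment; package versions are illustrative):

```toml
[tool.poetry.dependencies]
python = "^3.10"
openai = "^1.1.0"
# Declared optional so most users never pull it in, but poetry's resolver
# still has to find an openai version compatible with every declared
# dependency, so a litellm release pinned to openai<1 breaks `poetry install`.
litellm = { version = "^0.12", optional = true }

[tool.poetry.extras]
litellm = ["litellm"]
```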

krrishdholakia commented 1 year ago

Just pushed initial commit moving azure calls to http - https://github.com/BerriAI/litellm/commit/53abc31c27938bd956f62c05d93a0db574e97874

Tested on completion, streaming and exception mapping.

krrishdholakia commented 1 year ago

OpenAI chat completion calls moved to http - c57ed0a9d702307cdeb1953579cd9cecc4b7de4c

krrishdholakia commented 1 year ago

In v0 scope

v1 scope

krrishdholakia commented 1 year ago

v0 is now pushed - 182ca83912d032b7bc39e468fc9f24e3e83bedcf

krrishdholakia commented 1 year ago

should be live soon in v0.14.0

pchalasani commented 1 year ago

should be live soon in v0.14.0

@krrishdholakia where should I look for litellm version numbers? When I look at the main GitHub page, I see 0.11.1 as the latest.

fcakyon commented 1 year ago

should be live soon in v0.14.0

Still not released, right?

We appreciate your hard work 🙏

SeeknnDestroy commented 1 year ago

Waiting for the 0.14.0 release :)

krrishdholakia commented 1 year ago

The dev release for this is now out (apologies, we had CI/CD issues that needed to be resolved).

pip install litellm==0.14.0.dev1

@pchalasani @fcakyon @SeeknnDestroy

Any feedback on this release would be great

jackmpcollins commented 1 year ago

@krrishdholakia The package fails to import for me. The openai.error module no longer exists. It looks like the exceptions are now importable from the top level, and some have been renamed. Tested with openai 1.0.0 and 1.1.1.

>>> import litellm
...
.../litellm/timeout.py", line 20, in <module>
    from openai.error import Timeout
ModuleNotFoundError: No module named 'openai.error'
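
For anyone hitting the same error, the import change in openai v1 looks roughly like this (sketch based on the openai-python migration notes; exceptions moved to the package top level and some were renamed):

```python
# openai < 1.0
# from openai.error import Timeout, InvalidRequestError

# openai >= 1.0 - exceptions are importable from the top level,
# and several were renamed (Timeout -> APITimeoutError,
# InvalidRequestError -> BadRequestError)
from openai import APITimeoutError, BadRequestError
```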
krrishdholakia commented 1 year ago

Hey @jackmpcollins thanks for raising this. Putting out a fix.

krrishdholakia commented 1 year ago

@jackmpcollins going deeper on this.

Looks like they changed how exceptions get raised. APIError now requires the request object (an httpx.Request) to be passed in.

They also removed/changed certain exceptions (e.g. InvalidRequestError was renamed to BadRequestError).

I initially thought we could minimize pain for users by moving to call their API endpoints, but I was wrong.

We'll need to put out a v1 to handle this.

Tracking all the breaking changes here: https://github.com/BerriAI/litellm/discussions/648

Will aim to have a dev release out over the weekend for testing.
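
To illustrate why this is a breaking change, here is roughly what constructing an openai v1 status-code exception now requires (a sketch, assuming openai>=1.0 with httpx; the helper name is hypothetical):

```python
import httpx
import openai

def make_bad_request_error(message: str, url: str) -> openai.BadRequestError:
    """openai v1 exceptions need an httpx response (and body) up front."""
    request = httpx.Request("POST", url)
    response = httpx.Response(status_code=400, request=request)
    return openai.BadRequestError(message, response=response, body=None)
```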

krrishdholakia commented 1 year ago

ETA for those tracking this issue -

we'll try to have the dev release for openai v1 out by tomorrow night.

The undocumented changes caught in v0.14.0 are what we're working on fixing - tracking them here: https://github.com/BerriAI/litellm/discussions/648.

krrishdholakia commented 1 year ago

Dev Release for openai v1 compatibility (handling openai.error issues) just released - v1.0.0.dev1

This needed to be a breaking change due to OpenAI's renaming of exceptions. A more complete migration guide will be shared soon, but no changes have been made to the client interface in this dev release.

https://pypi.org/project/litellm/1.0.0.dev1/

cc: @clalanliu @pchalasani @jackmpcollins @fcakyon

This is an early release, so there might be unexpected issues. Any feedback here would be great.

Hoping to put out a fully baked prod release by Monday night PST.
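
To be explicit about "no changes to the client interface": the usual completion call should keep working unchanged on the dev release (minimal sketch, assuming an OPENAI_API_KEY environment variable and the documented response shape):

```python
from litellm import completion

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hi in one word."}],
)
print(response.choices[0].message.content)
```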

darinkishore commented 1 year ago

Hi! @ishaan-jaff @krrishdholakia

Minor feedback on the openai migration - on the latest openai, with litellm==1.0.0.dev1:

The deprecated openai.Completion is still being called by the proxy CLI test:

Traceback (most recent call last):
  File "/home/dakisho/.pyenv/versions/llama/bin/litellm", line 8, in <module>
    sys.exit(run_server())
             ^^^^^^^^^^^^
  File "/home/dakisho/.pyenv/versions/3.11.6/envs/llama/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dakisho/.pyenv/versions/3.11.6/envs/llama/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/home/dakisho/.pyenv/versions/3.11.6/envs/llama/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dakisho/.pyenv/versions/3.11.6/envs/llama/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dakisho/.pyenv/versions/3.11.6/envs/llama/lib/python3.11/site-packages/litellm/proxy/proxy_cli.py", line 129, in run_server
    response = openai.Completion.create(model="gpt-3.5-turbo", prompt='this is a test request, write a short poem')
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dakisho/.pyenv/versions/3.11.6/envs/llama/lib/python3.11/site-packages/openai/_utils/_proxy.py", line 22, in __getattr__
    return getattr(self.__get_proxied__(), attr)
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dakisho/.pyenv/versions/3.11.6/envs/llama/lib/python3.11/site-packages/openai/_utils/_proxy.py", line 43, in __get_proxied__
    return self.__load__()
           ^^^^^^^^^^^^^^^
  File "/home/dakisho/.pyenv/versions/3.11.6/envs/llama/lib/python3.11/site-packages/openai/lib/_old_api.py", line 33, in __load__
    raise APIRemovedInV1(symbol=self._symbol)
openai.lib._old_api.APIRemovedInV1: 

You tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

Config:

model_list:
  - model_name: zephyr-beta
    litellm_params: # params for litellm.completion() - https://docs.litellm.ai/docs/completion/input#input---request-body
      model: huggingface/HuggingFaceH4/zephyr-7b-beta
      api_base: http://0.0.0.0:8001
      initial_prompt_value: "\n</s>\n" # adjusted as per your function
      roles:
        system:
          pre_message: "<|system|>\\n"
          post_message: "</s>\n"
        user:
          pre_message: "<|user|>\n"
          post_message: "</s>\n"
        assistant:
          pre_message: "<|assistant|>\n"
          post_message: "</s>\n" # Adding a newline at the end as in your function
      final_prompt_value: "<|assistant|>\n" # To ensure the prompt ends correctly

litellm_settings:
  drop_params: True
  set_verbose: True

ran litellm --test.

jackmpcollins commented 1 year ago

Thanks @krrishdholakia . I have one small issue remaining. The APIStatusError super init method is missing some keyword args here (I suspect some other exceptions in this file might have the same issues). The RateLimitError a few lines above has these keyword args provided.

https://github.com/BerriAI/litellm/blob/62013520aaa3af9c322c851ddeb292321427ece3/litellm/exceptions.py#L90-L92

Traceback:

self = ServiceUnavailableError()
message = 'AnthropicException - Function calling is not supported by anthropic. To add it to the prompt, set `litellm.add_function_to_prompt = True`.'
llm_provider = 'anthropic', model = 'claude-2'

    def __init__(self, message, llm_provider, model):
        self.status_code = 503
        self.message = message
        self.llm_provider = llm_provider
        self.model = model
>       super().__init__(
            self.message
        )  # Call the base class constructor with the parameters it needs
E       TypeError: APIStatusError.__init__() missing 2 required keyword-only arguments: 'response' and 'body'

.venv/lib/python3.10/site-packages/litellm/exceptions.py:90: TypeError
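
For reference, a sketch of the shape the fix likely needs (illustrative only; attribute names follow the traceback above, and openai>=1.0's APIStatusError requires the keyword-only response and body arguments):

```python
import httpx
import openai

class ServiceUnavailableError(openai.APIStatusError):
    def __init__(self, message: str, llm_provider: str, model: str,
                 response: httpx.Response):
        self.status_code = 503
        self.message = message
        self.llm_provider = llm_provider
        self.model = model
        # openai v1's base class now demands `response` and `body`
        super().__init__(self.message, response=response, body=None)
```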
pchalasani commented 1 year ago

Any feedback here would be great.

Hi folks, thanks for working on this. As of now I am still seeing this error. Hope this helps.

Python 3.11.5 (main, Aug 24 2023, 15:09:45) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.17.2 -- An enhanced Interactive Python. Type '?' for help.

In [1]: !pip show litellm
/Users/pchalasani/Git/langroid/.venv/lib/python3.11/site-packages/IPython/core/interactiveshell.py:2644: UserWarning: You executed the system command !pip which may not work as expected. Try the IPython magic %pip instead.
  warnings.warn(
Name: litellm
Version: 0.14.1
Summary: Library to easily interface with LLM API providers
Home-page:
Author: BerriAI
Author-email:
License: MIT
Location: /Users/pchalasani/Git/langroid/.venv/lib/python3.11/site-packages
Requires: aiohttp, appdirs, certifi, click, importlib-metadata, jinja2, openai, python-dotenv, tiktoken, tokenizers
Required-by: langroid

In [2]: import litellm
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[2], line 1
----> 1 import litellm

File ~/Git/langroid/.venv/lib/python3.11/site-packages/litellm/__init__.py:332
    322 cohere_embedding_models: List = [
    323     "embed-english-v3.0",
    324     "embed-english-light-v3.0",
   (...)
    328     "embed-multilingual-v2.0",
    329 ]
    330 bedrock_embedding_models: List = ["amazon.titan-embed-text-v1"]
--> 332 from .timeout import timeout
    333 from .testing import *
    334 from .utils import (
    335     client,
    336     exception_type,
   (...)
    354     decode
    355 )

File ~/Git/langroid/.venv/lib/python3.11/site-packages/litellm/timeout.py:20
     18 from functools import wraps
     19 from threading import Thread
---> 20 from openai.error import Timeout
     23 def timeout(timeout_duration: float = 0.0, exception_to_raise=Timeout):
     24     """
     25     Wraps a function to raise the specified exception if execution time
     26     is greater than the specified timeout.
   (...)
     35         :rtype: callable
     36     """

ModuleNotFoundError: No module named 'openai.error'
vividfog commented 1 year ago

On latest 1.0.0.dev1, OpenAI 1.2.3, I get successful responses from /models and (/v1)/chat/completions but not from (/v1)/completions. Testing with Postman.

POST http://localhost:8000/v1/completions
{
    "model": "ollama/mistral",
    "prompt": "Here is a short completion of this sentence: ",
    "temperature": 0.2,
    "top_p": 0.5
}

This is the output:

INFO:     127.0.0.1:51847 - "POST /completions HTTP/1.1" 200 OK
An error occurred: 'TextCompletionResponse' object has no attribute '__fields_set__'

 Debug this by setting `--debug`, e.g. `litellm --model gpt-3.5-turbo --debug`
'TextCompletionResponse' object has no attribute '__fields_set__'

Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.11/site-packages/litellm/proxy/proxy_server.py", line 512, in completion
    return litellm_completion(
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.11/site-packages/litellm/proxy/proxy_server.py", line 446, in litellm_completion
    response = litellm.text_completion(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.11/site-packages/litellm/main.py", line 1888, in text_completion
    text_completion_response = TextCompletionResponse()
                               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.11/site-packages/litellm/utils.py", line 392, in __init__
    self.object = "text_completion"
    ^^^^^^^^^^^
  File "pydantic/main.py", line 405, in pydantic.main.BaseModel.__setattr__
AttributeError: 'TextCompletionResponse' object has no attribute '__fields_set__'

These test calls worked OK with the LiteLLM and OpenAI Python libraries from before DevDay.

krrishdholakia commented 1 year ago

@darinkishore tracking your issue here - https://github.com/BerriAI/litellm/issues/803

krrishdholakia commented 1 year ago

@pchalasani that doesn't look like the latest version of litellm. For context here's our timeout.py file now -

(screenshot of the current litellm/timeout.py)

https://github.com/BerriAI/litellm/blob/main/litellm/timeout.py#L20

Can you please upgrade to v1.0.0.dev1?

krrishdholakia commented 1 year ago

@vividfog i can repro the issue in our testing pipeline. I'm working on fixing these issues today. Aim to have a stable release out by tonight.

(screenshot of the repro in the testing pipeline)
pchalasani commented 1 year ago

can you please upgrade to v1.0.0.dev1

EDITED: OK, with this the import works fine, and a simple test with ollama/mistral works fine in non-stream mode.

But in stream mode I only get the first word, and finish_reason=stop so my stream parsing exits.

Here is the choices object

(screenshot of the choices object)
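
For context, the kind of stream-consumption loop being described (a minimal sketch of OpenAI-style streaming via litellm; the early finish_reason="stop" reported above is what makes such a loop exit after the first chunk):

```python
from litellm import completion

stream = completion(
    model="ollama/mistral",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
)
for chunk in stream:
    choice = chunk.choices[0]
    if choice.delta.content:
        print(choice.delta.content, end="", flush=True)
    if choice.finish_reason == "stop":  # parser exits once the stream signals completion
        break
```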
krrishdholakia commented 1 year ago

odd - thanks for sharing @pchalasani

i'll get this fixed in today's stable release.

SeeknnDestroy commented 1 year ago

Hi there, thank you for your work @krrishdholakia,

I am still unable to work with the latest version of llama-index - I can't run the code from their example. Getting the following error:

Traceback (most recent call last):
  File "/home/seekndestroy/mambaforge/envs/autollm/lib/python3.10/site-packages/pydantic/main.py", line 751, in __getattr__
    return pydantic_extra[item]
KeyError: 'pop'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/seekndestroy/safeVideo/github/autollm/testing.py", line 15, in <module>
    chat_response = llm.chat([message])
  File "/home/seekndestroy/mambaforge/envs/autollm/lib/python3.10/site-packages/llama_index/llms/base.py", line 187, in wrapped_llm_chat
    f_return_val = f(_self, messages, **kwargs)
  File "/home/seekndestroy/mambaforge/envs/autollm/lib/python3.10/site-packages/llama_index/llms/litellm.py", line 123, in chat
    return chat_fn(messages, **kwargs)
  File "/home/seekndestroy/mambaforge/envs/autollm/lib/python3.10/site-packages/llama_index/llms/litellm.py", line 195, in _chat
    message = from_openai_message_dict(message_dict)
  File "/home/seekndestroy/mambaforge/envs/autollm/lib/python3.10/site-packages/llama_index/llms/litellm_utils.py", line 167, in from_openai_message_dict
    additional_kwargs.pop("role")
  File "/home/seekndestroy/mambaforge/envs/autollm/lib/python3.10/site-packages/pydantic/main.py", line 753, in __getattr__
    raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}') from exc
AttributeError: 'Message' object has no attribute 'pop'
ishaan-jaff commented 1 year ago

@vividfog fixed the error for TextCompletion on proxy

ishaan-jaff commented 1 year ago

@darinkishore fixed the error for the proxy cli --test https://github.com/BerriAI/litellm/commit/e125414611347c5291c1c175c0bf370a56fea110

gururise commented 1 year ago

Getting an error trying to do inference with a Together.ai (Mistral-7B-Instruct) model:

openai==1.2.4, langchain==0.0.335, litellm==1.0.0.dev1

Relevant code snippet:

    _input = prompt.format_prompt(query=query)
    stop_sequence = ["\n\n","</s>"]
    model = ChatLiteLLM(model="together_ai/mistralai/Mistral-7B-Instruct-v0.1",max_tokens=1600,verbose=True,temperature=0.01,stop_sequence=stop_sequence, model_kwargs={"stop": stop_sequence})
    chain = LLMChain(llm=model, prompt=prompt)
    output = await chain.arun(_input.to_string())  # Throwing an error here

Output

Task exception was never retrieved

future: <Task finished name='Task-53' coro=<summarize() done, defined at /code/app/chatflow/webhook.py:301> exception=AttributeError("module 'openai' has no attribute 'error'")>

Traceback (most recent call last):

  File "/code/app/chatflow/webhook.py", line 303, in summarize
    summary : Summary = await get_pydantic_chain_summary(chat_history) # Mistral-7b-Instruct (togetherai)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/app/chatflow/helpers.py", line 99, in get_pydantic_chain_summary
    output = await chain.arun(_input.to_string())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 579, in arun
    await self.acall(
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 379, in acall
    raise e
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 373, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/llm.py", line 280, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/llm.py", line 147, in agenerate
    return await self.llm.agenerate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chat_models/base.py", line 469, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chat_models/base.py", line 429, in agenerate
    raise exceptions[0]
  File "/usr/local/lib/python3.11/site-packages/langchain/chat_models/base.py", line 545, in _agenerate_with_cache
    result = await self._agenerate(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chat_models/litellm.py", line 402, in _agenerate
    response = await acompletion_with_retry(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chat_models/litellm.py", line 106, in acompletion_with_retry
    retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chat_models/litellm.py", line 68, in _create_retry_decorator
    openai.error.Timeout,
    ^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'error'
MikeyBeez commented 1 year ago

Me too:

❯ python app3.py
/Users/bard/test2/app3.py:9: SyntaxWarning: str indices must be integers or slices, not str; perhaps you missed a comma?
  return response['choices'[0]['message']['content']]
Traceback (most recent call last):
  File "/Users/bard/test2/app3.py", line 1, in <module>
    import litellm
  File "/Users/bard/anaconda3/envs/litellm/lib/python3.11/site-packages/litellm/__init__.py", line 332, in <module>
    from .timeout import timeout
  File "/Users/bard/anaconda3/envs/litellm/lib/python3.11/site-packages/litellm/timeout.py", line 20, in <module>
    from openai.error import Timeout
ModuleNotFoundError: No module named 'openai.error'

❯ pip list
Package Version
aiohttp 3.8.6
aiosignal 1.3.1
annotated-types 0.6.0
anyio 3.7.1
appdirs 1.4.4
async-timeout 4.0.3
attrs 23.1.0
boto3 1.28.85
botocore 1.31.85
certifi 2023.7.22
charset-normalizer 3.3.2
click 8.1.7
distro 1.8.0
fastapi 0.104.1
filelock 3.13.1
frozenlist 1.4.0
fsspec 2023.10.0
h11 0.14.0
httpcore 1.0.2
httpx 0.25.1
huggingface-hub 0.17.3
idna 3.4
importlib-metadata 6.8.0
Jinja2 3.1.2
jmespath 1.0.1
litellm 0.14.0.dev1
MarkupSafe 2.1.3
multidict 6.0.4
openai 1.2.3
packaging 23.2
pip 23.3
pydantic 2.5.0
pydantic_core 2.14.1
python-dateutil 2.8.2
python-dotenv 1.0.0
PyYAML 6.0.1
redis 5.0.1
regex 2023.10.3
requests 2.31.0
s3transfer 0.7.0
setuptools 68.0.0
six 1.16.0
sniffio 1.3.0
starlette 0.27.0
tiktoken 0.5.1
tokenizers 0.14.1
tqdm 4.66.1
typing_extensions 4.8.0
urllib3 2.0.7
uvicorn 0.24.0.post1
wheel 0.41.2
yarl 1.9.2
zipp 3.17.0

BTW, I really hate openai for doing this to everyone. They are either not careful or greedy. The pythonic way is to deprecate a function before removing it, as I'm sure they know.

krrishdholakia commented 1 year ago

@gururise we've submitted an update to langchain to fix this on the chatlitellm integration - https://github.com/langchain-ai/langchain/pull/13307

@MikeyBeez I believe your litellm version is 0.14.0.dev1 - could you upgrade to 1.0.0.dev1 and let me know if that resolves your problem?

krrishdholakia commented 1 year ago

I'm also working on getting a stable release out by EOD

krrishdholakia commented 1 year ago

@pchalasani @gururise @jackmpcollins @MikeyBeez @vividfog @SeeknnDestroy LiteLLM v1 is now in production

pip install --upgrade litellm==1.0.0

Closing this ticket - as the initial purpose has been met. If we've missed any issues - please file them as bugs. We're closely monitoring this thread + release.

MikeyBeez commented 1 year ago

Thanks! Awesome!

jackmpcollins commented 1 year ago

Opened PR https://github.com/BerriAI/litellm/pull/813 to fix the issue with exceptions I mentioned above (https://github.com/BerriAI/litellm/issues/774#issuecomment-1807369345). I think many of the other exceptions have similar issues.

ishaan-jaff commented 1 year ago

@jackmpcollins thanks for the PR - just reviewed it

fcakyon commented 1 year ago

Hey @ishaan-jaff @krrishdholakia, we love litellm in autollm, but the llama-index LiteLLM class needs an update - it's not working with the latest litellm: https://github.com/run-llama/llama_index/issues/8906

Do you have any plans to help with this?

pchalasani commented 1 year ago

LiteLLM v1 is now in production

great work! my quick tests are passing

MikeyBeez commented 1 year ago

You guys are terrific. You are doing so much to help the community march towards AGI. Fantastic!

manisenkov commented 1 year ago

Hi Team! Do I understand correctly that the OpenAI proxy doesn't support the /embeddings endpoint yet? It is mentioned in the docs, but I don't see the implementation in proxy_server.py.

In addition - any plans to proxy Ollama embeddings?

guiramos commented 1 year ago

Do we have a timeline/roadmap for incorporating OpenAI's Assistants API?