-
### What happened?
According to the docs, `seed` is a supported (translated) parameter for Anthropic:
https://docs.litellm.ai/docs/completion/input#translated-openai-params
However, when calling `acompletion`:
```
litellm.acompl…
```
-
### What happened?
I ran this following the [docs](https://docs.litellm.ai/docs/observability/langsmith_integration), but it didn't register any trace.
Using LangSmith's `@traceable` decorator doe…
-
Hello,
I followed the instructions in the README and changed the installation command to `pip install appworld[experiments]`. I also configured everything strictly according to the README. After …
-
-
How to run AgentSims with Llama 2
-
### Which API Provider are you using?
OpenAI Compatible (through LiteLLM)
### Which Model are you using?
Claude 3 Opus, Claude 3.5 Sonnet, Claude 3 Haiku
### What happened?
First, thank…
-
### The Feature
Azure OpenAI only returns the model family name (like `gpt-4` instead of `gpt-4-vision-preview`), not the actual model name. Like #1810, overwrite what is returned for the `model…
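A minimal sketch of the kind of override being requested, assuming a simple mapping from Azure deployment family names back to the actual model names; the `DEPLOYMENT_MODEL_MAP` and the `SimpleNamespace` response stand-in are illustrative assumptions, not LiteLLM internals:

```python
# Sketch: restore the actual model name on a response where Azure reported
# only the model family. The mapping and the response object are stand-ins.
from types import SimpleNamespace

DEPLOYMENT_MODEL_MAP = {
    "gpt-4": "gpt-4-vision-preview",  # Azure reports only the family name
}

def restore_model_name(response, deployment_map=DEPLOYMENT_MODEL_MAP):
    """Overwrite the family name with the deployed model name, if known."""
    response.model = deployment_map.get(response.model, response.model)
    return response

resp = SimpleNamespace(model="gpt-4")
restore_model_name(resp)
# resp.model is now "gpt-4-vision-preview"; unknown names pass through unchanged
```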
-
While setting up LiteLLM to use with the Azure Vision enhancements, the documentation suggests using `/extensions` as part of the URL. However, for the latest recommended preview version `2024-02…
-
I'm using Phoenix to trace LLM calls in a FastAPI application that utilizes LiteLLM. When running the application, LLM calls work correctly, and responses are returned as expected. However, traces are…
-
I am using the `litellm` client to benchmark a HuggingFace TGI server.
In `token_benchmark_ray.py`, `req_launcher.get_next_ready()` is called periodically to fetch pending **results**, with the `bl…
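For readers unfamiliar with the pattern, a minimal stand-in sketch of the periodic, non-blocking result fetch described above; the `Launcher` class and its `block` parameter are illustrative assumptions, not the actual llmperf API:

```python
# Illustrative sketch of periodically draining finished benchmark results.
# Launcher here is a stand-in, not llmperf's RequestsLauncher.
import collections

class Launcher:
    def __init__(self):
        self._done = collections.deque()

    def finish(self, result):
        """Record a completed request's result (normally done by workers)."""
        self._done.append(result)

    def get_next_ready(self, block=False):
        """Return all finished results. With block=False, return immediately
        even when nothing is ready, so the caller can keep polling."""
        ready = list(self._done)
        self._done.clear()
        return ready

launcher = Launcher()
launcher.finish({"tokens": 128})
batch = launcher.get_next_ready(block=False)  # drains the one pending result
empty = launcher.get_next_ready(block=False)  # nothing left, returns []
```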