BerriAI / litellm
Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs).
https://docs.litellm.ai/docs/
Language: Other · 10.11k stars · 1.13k forks
Issues
Sorted by: Newest
[Feat] add endpoint to debug memory util (#4364) · ishaan-jaff · closed 3 days ago · 1 comment
[Bug]: How do I stop import litellm from loading .env? (#4361) · paul-gauthier · opened 6 days ago · 1 comment
fix - Can't access /v1/audio/speech with some user key (#4360) · ishaan-jaff · closed 6 days ago · 1 comment
Support for Redis Clusters. LiteLLM currently only supports Redis Standalone nodes. (#4358) · ishaan-jaff · opened 6 days ago · 0 comments
[Security Fix - Proxy Server ADMIN UI] - Store credentials in cookies + use strong JWT signing secret (#4357) · ishaan-jaff · closed 6 days ago · 1 comment
[Test] Test routes on LiteLLM Proxy always includes OpenAI Routes (#4356) · ishaan-jaff · closed 6 days ago · 1 comment
[Bug]: Assertion Error while using google models via OpenRouter (#4355) · kushalsharma · opened 1 week ago · 0 comments
[Bug/feature]: Support `additional_drop_params` for embedding models (#4354) · Manouchehri · opened 1 week ago · 0 comments
[Feature]: Support new Amazon Multimodal Embedding model (#4353) · krrishdholakia · opened 1 week ago · 0 comments
Disable message redaction in logs via request header (#4352) · msabramo · closed 6 days ago · 5 comments
[Fix] Azure check if `api_version` version supports `response_format` (#4351) · ishaan-jaff · opened 1 week ago · 1 comment
Print content window fallbacks on startup to help verify configuration (#4350) · lolsborn · closed 1 week ago · 1 comment
feat(dynamic_rate_limiter.py): Dynamic tpm quota (multiple projects) (#4349) · krrishdholakia · closed 6 days ago · 1 comment
[Bug]: Anthropic not throwing `BadRequestError` on bad tool calls (#4348) · jamesbraza · closed 1 week ago · 2 comments
Is litellm.encode() accurate for Claude 3.5 Sonnet? (#4347) · paul-gauthier · closed 1 week ago · 4 comments
[Feature]: Bedrock httpx to support 'AWS_SESSION_TOKEN' (#4346) · bschulth · opened 1 week ago · 7 comments
[Fix + Test] - Spend tags not getting stored on 1.40.9 (#4345) · ishaan-jaff · closed 1 week ago · 1 comment
refactor(litellm_logging.py): refactors how slack_alerting generates langfuse trace url (#4344) · krrishdholakia · closed 1 week ago · 1 comment
ci(config.yml): add pytest-xdist (#4343) · krrishdholakia · opened 1 week ago · 1 comment
fix - liteLLM proxy /moderations endpoint returns 500 error when model is not specified (#4342) · ishaan-jaff · closed 1 week ago · 1 comment
[Bug]: can't run tests with `pytest` from repo root (#4341) · jamesbraza · closed 6 days ago · 2 comments
[Feat] Admin UI - Show Cache hit stats (#4340) · ishaan-jaff · closed 1 week ago · 1 comment
[Bug] Gemini streaming chunks can only receive one chunk (#4339) · jh10001 · closed 6 days ago · 7 comments
[Bug]: Bedrock throttling errors don't seem to report to Langfuse? (#4338) · Manouchehri · closed 6 days ago · 3 comments
fix: use per-token costs for claude via vertex_ai (#4337) · spdustin · closed 1 week ago · 1 comment
[Bug]: liteLLM proxy /moderations endpoint returns 500 error when model is not specified (#4336) · malagna-amplify · closed 1 week ago · 5 comments
Add response cost in model response (headers/hidden params) (#4335) · krrishdholakia · closed 1 day ago · 1 comment
[Bug]: Spend tags metadata storage in `v1.40.9-stable` is broken (#4334) · rahulvbrahmal-sigtech · closed 1 week ago · 3 comments
[Bug]: Gemini breaks when passing presence_penalty or frequency_penalty parameters (#4333) · toniengelhardt · closed 1 week ago · 0 comments
[Bug]: `content-type` is wrong for tts (#4332) · Manouchehri · opened 1 week ago · 0 comments
[Feature]: Add caching for tts models (#4331) · Manouchehri · opened 1 week ago · 0 comments
[Bug]: Langfuse logger doesn't work for tts models (#4330) · Manouchehri · opened 1 week ago · 0 comments
[Bug]: Can't access `/v1/audio/speech` with some user keys (#4329) · Manouchehri · closed 6 days ago · 1 comment
test(test_python_38.py): add coverage for non-gen settings config.yaml flow (#4328) · krrishdholakia · closed 1 week ago · 1 comment
fix(vertex_httpx.py): support sending extra_headers (#4327) · Manouchehri · closed 1 week ago · 5 comments
[Feature]: Support bedrock official tool usage (#4326) · luccazifood · closed 1 week ago · 3 comments
Update proxy_cli.py (#4325) · vanpelt · closed 1 week ago · 8 comments
[Bug]: The latest release broke the proxy command when specifying a config (#4324) · vanpelt · closed 1 week ago · 0 comments
fix(key_management_endpoints.py): use common _duration_in_seconds function (#4323) · krrishdholakia · closed 1 week ago · 1 comment
[Fix] user field and user_api_key_* is sometimes omitted randomly (#4322) · ishaan-jaff · closed 1 week ago · 1 comment
fix(user_api_key_auth.py): ensure user has access to fallback models (#4321) · krrishdholakia · closed 1 week ago · 1 comment
docs - add algolia search 🫡 (#4320) · ishaan-jaff · closed 1 week ago · 1 comment
[Bug]: Embedding output missing from langfuse (#4319) · Manouchehri · opened 1 week ago · 1 comment
[Feat] allow using custom router strategy (#4318) · ishaan-jaff · closed 1 week ago · 1 comment
[Feature]: Support setting custom `api_base` for `vertex_ai_beta` models (#4317) · Manouchehri · closed 1 week ago · 11 comments
[Feature]: Separate alerting by team (#4316) · krrishdholakia · opened 1 week ago · 0 comments
[Feature]: Proxy langfuse requests (#4315) · krrishdholakia · closed 10 hours ago · 2 comments
[Bug]: Cleanup error message tpm/rpm limit hit -> `max_parallel_requests limit reached` (#4314) · krrishdholakia · opened 1 week ago · 0 comments
fix(utils.py): allow dropping specific openai params (#4313) · krrishdholakia · closed 1 week ago · 3 comments
[Bug]: Long Azure OpenAI streaming errors not logged on Langfuse (`incomplete chunked read`) (#4312) · Manouchehri · opened 1 week ago · 7 comments