Closed gitwittidbit closed 2 months ago
Seems the jury is still out on whether this is a bug...
This error seems similar to https://github.com/BerriAI/litellm/issues/3820. @gitwittidbit, can you verify whether this issue is still occurring? Otherwise we can close this issue.
What happened?
It appears a bug occurred, but it may well be attributable to my incompetence...
I am using this in my docker-compose.yml:

```yaml
litellm:
  container_name: litellm
  image: litellm/litellm:v1.35.31
  volumes:
```
And this is my .env:
```
OLLAMA_BASE_URL='http://localhost:11434'
SCARF_NO_ANALYTICS=true
DO_NOT_TRACK=true
LITELLM_LOCAL_MODEL_COST_MAP="True"
```
And this is my litellm-config.yaml:
```yaml
model_list:

litellm_settings:
  drop_params: True
  max_budget: 100
  budget_duration: 30d
  cache: True # set cache responses to True, litellm defaults to using a redis cache
  cache_params: # cache_params are optional
    type: "local" # The type of cache to initialize. Can be "local" or "redis". Defaults to "local".
    supported_call_types: ["acompletion", "completion", "embedding", "aembedding"] # defaults to all litellm call types

general_settings:
  master_key: sk-1234
```
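For reference, the `model_list` above is empty, so the proxy has no models registered. A minimal sketch of what an Ollama entry typically looks like in a LiteLLM config is below — the model name `llama3` and the `host.docker.internal` address are assumptions for illustration (inside a container, `localhost` refers to the container itself, not the Docker host, so the Ollama URL may need to point at the host):

```yaml
model_list:
  - model_name: llama3                          # hypothetical alias exposed by the proxy
    litellm_params:
      model: ollama/llama3                      # ollama/<model> routes via the Ollama provider
      api_base: http://host.docker.internal:11434  # assumed host address; not 'localhost' from inside the container
```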
Relevant log output
Twitter / LinkedIn details
No response