devdev999 opened this issue 1 month ago
hey @devdev999 are you able to tell what version of prisma is running?
@devdev999 Can you try running the litellm-database
dockerfile - https://github.com/BerriAI/litellm/pkgs/container/litellm-database
It pre-generates the prisma client, which I think might solve this issue
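For reference, a minimal way to try that image - a hedged sketch, assuming the `main-latest` tag from the registry page above and a placeholder connection string (substitute your own):

```shell
# Pull the database-enabled image (tag assumed; check the registry page above).
docker pull ghcr.io/berriai/litellm-database:main-latest

# Run it pointing at your Postgres instance. DATABASE_URL is the env var
# the proxy reads; replace the placeholder credentials with your own.
docker run -p 4000:4000 \
  -e DATABASE_URL="postgresql://user:password@host:5432/litellm" \
  ghcr.io/berriai/litellm-database:main-latest --port 4000
```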
What is the difference between litellm and litellm-database?
Thank you.
Hey @devdev999 I was able to fix this by using litellm's dockerfile as a base image and running `prisma generate` as part of my dockerfile -
# Use the provided base image
FROM ghcr.io/berriai/litellm:main-latest
# Set the working directory to /app
WORKDIR /app
### [👇 KEY STEP] ###
# Install Prisma CLI and generate Prisma client
RUN pip install prisma
RUN prisma generate
### FIN ####
# Expose the necessary port
EXPOSE 4000
# Override the CMD instruction with your desired command and arguments
# WARNING: FOR PROD DO NOT USE `--detailed_debug` it slows down response times, instead use the following CMD
# CMD ["--port", "4000", "--config", "config.yaml"]
# Define the command to run your app
ENTRYPOINT ["litellm"]
CMD ["--port", "4000"]
Docs: https://docs.litellm.ai/docs/proxy/deploy#litellm-without-internet-connection
I still get the same error even after running the image built using this Dockerfile. How do I verify that the prisma binaries are inside the image? Where are they stored?
I ran `prisma --version` inside the docker terminal - this should show you the binaries.
If you see it print out `installing node...`, then you know it's not installed.
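To check this from the host without an interactive session, a hedged sketch (image name `my-docker-image` is a placeholder; the cache path is an assumption based on the `prisma --version` output later in this thread):

```shell
# Ask prisma for its version inside the built image. If the engines are
# baked in, it prints the binary paths; if it first prints "installing node...",
# the generate step didn't persist into the image.
docker run --rm --entrypoint sh my-docker-image -c "prisma --version"

# The python prisma client caches its engine binaries; this path is an
# assumption inferred from the version output below - verify in your image.
docker run --rm --entrypoint sh my-docker-image \
  -c "ls .prisma/.cache/prisma-python/binaries/ 2>/dev/null || echo 'no cached binaries'"
```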
Verified that the binaries are in the built image; still getting the same container startup error and `requester_ip_address` not found. Tried both ghcr.io/berriai/litellm:main-latest and ghcr.io/berriai/litellm:main-v1.42.5-stable as the base image:
prisma : 5.4.2
@prisma/client : Not found
Current platform : debian-openssl-3.0.x
Query Engine (Binary) : query-engine ac9d7041ed77bcc8a8dbd2ab6616b39013829574 (at .prisma/.cache/prisma-python/binaries/5.4.2/ac9d7041ed77bcc8a8dbd2ab6616b39013829574/node_modules/@prisma/engines/query-engine-debian-openssl-3.0.x)
Schema Engine : schema-engine-cli ac9d7041ed77bcc8a8dbd2ab6616b39013829574 (at .prisma/.cache/prisma-python/binaries/5.4.2/ac9d7041ed77bcc8a8dbd2ab6616b39013829574/node_modules/@prisma/engines/schema-engine-debian-openssl-3.0.x)
Schema Wasm : @prisma/prisma-schema-wasm 5.4.1-2.ac9d7041ed77bcc8a8dbd2ab6616b39013829574
Default Engines Hash : ac9d7041ed77bcc8a8dbd2ab6616b39013829574
Studio : 0.494.0
Hey @devdev999 I'm unable to repro the same behaviour. My base test case for this is:
Create a simple Dockerfile:
# Use the provided base image
FROM ghcr.io/berriai/litellm:main-latest
# Set the working directory to /app
WORKDIR /app
### [👇 KEY STEP] ###
# Install Prisma CLI and generate Prisma client
RUN pip install prisma
RUN prisma generate
### FIN ####
# Expose the necessary port
EXPOSE 4000
# Override the CMD instruction with your desired command and arguments
# WARNING: FOR PROD DO NOT USE `--detailed_debug` it slows down response times, instead use the following CMD
# CMD ["--port", "4000", "--config", "config.yaml"]
# Define the command to run your app
ENTRYPOINT ["litellm"]
CMD ["--port", "4000"]
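Then build it (the image name is chosen to match the run command that follows; any tag works):

```shell
# Build the test image from the Dockerfile above.
docker build -t my-docker-image .
```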
docker network create --internal my_internal_network
docker run --network my_internal_network -p 4000:4000 my-docker-image
This script works for me without issues (no startup errors)
Tried your example and it indeed works fine. The error only seems to occur when connecting to Postgres as an external DB.
Reproducible example docker-compose.yml:
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-v1.42.5-stable-patched-latest
    ports:
      - "4000:4000" # Map the container port to the host, change the host port if necessary
    environment:
      DATABASE_URL: "postgresql://llmproxy:dbpassword9090@db:5432/litellm"
    networks:
      - my_internal_network
  db:
    image: postgres:16.3
    restart: always
    environment:
      POSTGRES_DB: litellm
      POSTGRES_USER: llmproxy
      POSTGRES_PASSWORD: dbpassword9090
    networks:
      - my_internal_network
networks:
  my_internal_network:
    external: true
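To reproduce with the compose file above: the network is declared `external`, so it has to exist before the stack starts. The commands are inferred from the file itself:

```shell
# The compose file marks the network as external, so create it first.
# --internal blocks outbound internet, which is what triggers the
# prisma download attempt at proxy startup.
docker network create --internal my_internal_network

# Start Postgres and the proxy together.
docker compose up
```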
Seems to be linked to #4915
@krrishdholakia Traced the docker image versions: this started happening in 1.41.18; there were no prisma startup errors on 1.41.17. From the changelog https://github.com/BerriAI/litellm/compare/v1.41.17...v1.41.18 and tag https://github.com/BerriAI/litellm/releases/tag/v1.41.18, the only prisma-related change was in #4640
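For anyone repeating this bisection, a rough sketch of the process - the tag format is an assumption (verify on GHCR), and the check is just a plain startup-log grep, not an official LiteLLM health check:

```shell
# Walk through release tags and watch each container's startup logs
# for prisma errors. Tag naming is an assumption; network/DB values
# match the compose example above.
for tag in main-v1.41.17 main-v1.41.18; do
  echo "=== $tag ==="
  docker run -d --name "litellm-$tag" --network my_internal_network \
    -e DATABASE_URL="postgresql://llmproxy:dbpassword9090@db:5432/litellm" \
    "ghcr.io/berriai/litellm:$tag" --port 4000
  sleep 20
  docker logs "litellm-$tag" 2>&1 | grep -i prisma || echo "no prisma errors"
  docker rm -f "litellm-$tag"
done
```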
Hi @devdev999 thanks for your work - we reverted #4640. Will check if the issue persists on the new release
@devdev999 can we set up a 1:1 support channel? I'd love to prioritize your issues.
What happened?
I am getting the following error:
prisma.errors.DataError: The column 'requester_ip_address' does not exist in the current database.
The proxy still works, but the latest spend is not being tracked. The LiteLLM container does not have access to the internet. From what I can gather from the logs, it is trying to install some dependency from online, which then fails. There are some errors with prisma on LiteLLM container startup:
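For anyone hitting this, one possible stopgap is to push the bundled prisma schema to the database manually so the missing column gets created - a hedged sketch, assuming the schema ships in the image at /app/schema.prisma (verify the path in your container):

```shell
# Exec into the running proxy container and sync the prisma schema with
# the database. `prisma db push` creates missing columns (such as
# requester_ip_address) without writing migration files.
docker exec -it <litellm-container> sh -c \
  "prisma db push --schema /app/schema.prisma"
```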
Relevant log output