aws-samples / aws-genai-llm-chatbot

A modular and comprehensive solution to deploy a Multi-LLM and Multi-RAG powered chatbot (Amazon Bedrock, Anthropic, HuggingFace, OpenAI, Meta, AI21, Cohere, Mistral) using AWS CDK on AWS
https://aws-samples.github.io/aws-genai-llm-chatbot/
MIT No Attribution

Container Image size is too big for lambda #38

Closed. Rionel-Dmello closed this issue 12 months ago.

Rionel-Dmello commented 1 year ago

Hi, when deploying this application with all the RAG stacks enabled, I get this error from CloudFormation during the deployment:

"Lambda function AwsGenaiLllmChatbotStack-OpenSearchVectorSearchInd-RvPI85ZArJDJ reached terminal FAILED state due to InvalidImage(SizeLimitExceeded: Uncompressed container image size exceeds 10 GiB limit) and failed to stabilize"

I believe we are exceeding the 10 GiB uncompressed size limit for Lambda container images. This happens for both the Aurora and OpenSearch RAG sources.
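
For anyone who wants to confirm this locally before a lengthy cdk deploy, here is a rough sketch that checks the uncompressed size of a locally built image against the 10 GiB limit. It assumes the docker Python SDK is installed, the Docker daemon is running, and you have built the ingestion image yourself (for example from the stack's Dockerfile directory); the rag-index-test tag is just a placeholder.

import docker

LIMIT_BYTES = 10 * 1024 ** 3  # Lambda's 10 GiB limit on uncompressed container images

client = docker.from_env()
image = client.images.get("rag-index-test")  # placeholder tag for the locally built image
size_bytes = image.attrs["Size"]  # uncompressed size reported by the Docker daemon
print(f"uncompressed size: {size_bytes / 1024 ** 3:.2f} GiB")
print("over the 10 GiB limit" if size_bytes > LIMIT_BYTES else "within the 10 GiB limit")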

JahnKhan commented 1 year ago

Hi, I enabled only the pgvector RAG source and the deployment is stuck during the publish process. Have you experienced something similar? It runs for hours and then hits a timeout; it retries but without success. Could this be because of the large size of the container?

UPDATE: I switched to a faster internet connection and the deployment went through. It took 30 minutes, and there are 3 containers in ECR; the largest is 6150.83 MB, but this is only for the pgvector RAG. I have not tried the OpenSearch vector store.

gregsomm commented 1 year ago

I hit the same error when attempting to use OpenSearch:

34/53 Currently in progress: AwsGenaiLllmChatbotStack, OpenSearchVectorSearchApiHandler6B246D88, OpenSearchVectorSearchCreateIndex7EED590A, OpenSearchVectorSearchIndexDocument36B9A569
AwsGenaiLllmChatbotStack | 34/53 | 2:52:03 PM | CREATE_FAILED        | AWS::Lambda::Function                           | OpenSearchVectorSearch/IndexDocument (OpenSearchVectorSearchIndexDocument36B9A569) Resource handler returned message: "Lambda function AwsGenaiLllmChatbotStack-OpenSearchVectorSearchInd-7ZOZdnMIEY1f reached terminal FAILED state due to InvalidImage(SizeLimitExceeded: Uncompressed container image size exceeds 10 GiB limit) and failed to stabilize" (RequestToken: f8e7a35a-da2e-bc26-b3d6-af6c6f55add2, HandlerErrorCode: NotStabilized)
AwsGenaiLllmChatbotStack | 34/53 | 2:52:04 PM | CREATE_FAILED        | AWS::Lambda::Function                           | OpenSearchVectorSearch/ApiHandler (OpenSearchVectorSearchApiHandler6B246D88) Resource creation cancelled
AwsGenaiLllmChatbotStack | 34/53 | 2:52:04 PM | CREATE_FAILED        | AWS::Lambda::Function                           | OpenSearchVectorSearch/CreateIndex (OpenSearchVectorSearchCreateIndex7EED590A) Resource creation cancelled

Any clues about what's happening? I assume one of the packages has grown recently, and that's causing the build to exceed the 10 GiB limit...

chloe-kwak commented 1 year ago

I also hit the same error when creating PGVector during deployment due to the container size (over 10 GiB).

1:57:39 PM | CREATE_FAILED | AWS::Lambda::Function | AuroraPgVector/Doc...g/DocumentIndexing Resource handler returned message: "Lambda function AwsGenaiLllmChatbotStack-AuroraPgVectorDocumentInd-GSLQpWzr9KTr reached terminal FAILED state due to InvalidImage(SizeLimitExceeded: Uncompressed container image size exceeds 10 GiB limit) and failed to stabilize" (RequestToken: aba5d2f6-c030-ba0a-d5e9-189946108dbc, HandlerErrorCode: NotStabilized)

QuinnGT commented 1 year ago

Same error.

Resource handler returned message: "Lambda function AwsGenaiLllmChatbotStack-OpenSearchVectorSearchInd-jlhdiMLnzFoE reached terminal FAILED state due to InvalidImage(SizeLimitExceeded: Uncompressed container image size exceeds 10 GiB limit) and failed to stabilize" (RequestToken: c9434bc1-73a0-2b42-b9a5-9ab199371783, HandlerErrorCode: NotStabilized)

MirandaDora commented 1 year ago

Same issue.

gregsomm commented 1 year ago

FYI, a colleague of mine found that locking to previous versions is a workaround:

aws_lambda_powertools
aws_xray_sdk
boto3-1.28.21-py3-none-any.whl
botocore-1.31.21-py3-none-any.whl
langchain==0.0.288
opensearch-py
requests_aws4auth
unstructured[all-docs]==0.10.14

However, I will say that this is NOT a good solution, since you never get later versions of the libraries in question. But if you need to get something working right away, this might help.
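
For reference, applied as pip-style pins the list above would look roughly like the following (the boto3/botocore wheel filenames correspond to the pinned versions shown; which requirements.txt each RAG stack actually reads is an assumption, so adjust for your checkout):

aws_lambda_powertools
aws_xray_sdk
boto3==1.28.21
botocore==1.31.21
langchain==0.0.288
opensearch-py
requests_aws4auth
unstructured[all-docs]==0.10.14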

bigadsoleiman commented 1 year ago

Thanks @gregsomm.

unstructured paired with langchain pushes the image over the current 10 GiB size limit for container-based Lambda functions.

In the next release, which we expect to publish in a few days, we are moving the document-handling workflow that uses unstructured away from Lambda, so that the solution can keep benefiting from future features and support for new document types.

Until then, locking the versions is the way to go.

Rionel-Dmello commented 1 year ago

Hi @bigadsoleiman, I tried pulling the latest commit and building it. It still fails when deploying the pgvector stack with the error: "Lambda function AwsGenaiLllmChatbotStack-AuroraPgVectorDocumentInd-m9rbn5M19Wdn reached terminal FAILED state due to InvalidImage(SizeLimitExceeded: Uncompressed container image size exceeds 10 GiB limit) and failed to stabilize" (RequestToken: 7a72dca0-4d75-d02e-a14e-f3163f4d4478, HandlerErrorCode: NotStabilized)

Rionel-Dmello commented 1 year ago

Okay, so with the combination of @bigadsoleiman's fix and also changing langchain==0.0.288 everywhere, I was able to deploy the Kendra and OpenSearch stacks successfully. PGVector still fails with the size > 10 GiB. I also changed the Dockerfile for OpenSearch to the file below, per @koushal2018's suggestion:

FROM public.ecr.aws/lambda/python:3.11
RUN yum update -y && \
    yum install -y git wget gcc rustc rustup cargo libxml2-devel libxslt-devel libmagic-dev poppler-utils tesseract-ocr libreoffice pandoc && \
    yum clean all && \
    rm -rf /var/cache/yum
COPY dependencies/* ./
COPY requirements.txt ./
COPY index.py ./
RUN pip install --upgrade pip && \
    pip install --no-cache-dir unstructured[feature1,feature2] && \
    pip install --no-cache-dir -r requirements.txt --upgrade
RUN python -c "import nltk;nltk.download('punkt', download_dir='/home/sbx_user1051/nltk_data')" && \
    python -c "import nltk;nltk.download('averaged_perceptron_tagger', download_dir='/home/sbx_user1051/nltk_data')"
CMD ["index.lambda_handler"]

QuinnGT commented 1 year ago

> Okay, so with the combination of @bigadsoleiman's fix and also changing langchain==0.0.288 everywhere. Also changed the Dockerfile for OpenSearch to the file above.

@Rionel-Dmello did this resolve the size issue or were you just clarifying this is what you used that generated the error?

Rionel-Dmello commented 1 year ago

@QuinnGT Thanks! Edited my original post, but this helped me deploy both the Kendra and the Opensearch stacks. PGVector still throws an error.

gonzobrandon commented 1 year ago

Same here for pgVector with the version freeze (and after pulling the latest master with the freeze applied): the container is > 10 GiB and the deployment rolls back:

Lambda function AwsGenaiLllmChatbotStack-AuroraPgVectorDocumentInd-ybhM3EDcYGDg reached terminal FAILED state due to InvalidImage(SizeLimitExceeded: Uncompressed container image size exceeds 10 GiB limit)

jawhnycooke commented 1 year ago

Same issue as the above reports: Lambda function AwsGenaiLllmChatbotStack-AuroraPgVectorDocumentInd-27OdlB7gdQ0g reached terminal FAILED state due to InvalidImage(SizeLimitExceeded: Uncompressed container image size exceeds 10 GiB limit) and failed to stabilize

schadem commented 12 months ago

I can confirm that changing both the requirements.txt and the Dockerfile, as @Rionel-Dmello described above, fixes the issue for me as well.

bigadsoleiman commented 12 months ago

We released a new version, v3.0.0, in which document ingestion is handled by AWS Batch instead of AWS Lambda, to accommodate the growing dependencies and avoid possible Lambda timeout issues.

V2 won't be supported, so we suggest moving to V3 for a more long-term solution, along with several new features around RAG.