VirajBagal opened this issue 3 years ago
Make sure the transformers version is updated. An old transformers version might not support that model.
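If it helps, a quick way to confirm which transformers version actually ends up inside the image is to run a one-line check in the built container (a minimal sketch; how you invoke it, e.g. via docker run, depends on your setup):

# Print the transformers version actually installed in the image; if it is an
# old one pinned by the base image, the requirements file may need to pin a
# newer release explicitly.
import transformers
print(transformers.__version__)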
This runs the Dockerfile, right? In the Dockerfile, I have used the same transformers image as you have used, i.e. FROM huggingface/transformers-pytorch-cpu:latest. The container gets built successfully both locally and in GitHub Actions, but somehow it gives the above error when tested via the 'Test' section in AWS Lambda.
When I used the week_8 Docker image in the Lambda function and tested it, it worked. The image is the following:
FROM amazon/aws-lambda-python

ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG MODEL_DIR=./models
RUN mkdir $MODEL_DIR

# cache transformers downloads inside the image and silence verbose logging
ENV TRANSFORMERS_CACHE=$MODEL_DIR \
    TRANSFORMERS_VERBOSITY=error

ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
    AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY

# git and a C++ compiler are needed to build some of the pip dependencies
RUN yum install git -y && yum -y install gcc-c++

COPY requirements_inference.txt requirements_inference.txt
RUN pip install -r requirements_inference.txt --no-cache-dir

COPY ./ ./
ENV PYTHONPATH "${PYTHONPATH}:./"
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8

RUN pip install "dvc[s3]"
# configuring remote server in dvc
RUN dvc init --no-scm
RUN dvc remote add -d model-store s3://models-dvc/trained_models/
# pulling the trained model
RUN dvc pull dvcfiles/trained_model.dvc

# run the handler once at build time so the model artifacts are cached in the image
RUN python lambda_handler.py
RUN chmod -R 0755 $MODEL_DIR

CMD [ "lambda_handler.lambda_handler" ]
I was getting the error for the week_7 Docker image in the Lambda function Test. The image was the following:
FROM huggingface/transformers-pytorch-cpu:latest

COPY ./ /app
WORKDIR /app

ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY

# these envs are experimental
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
    AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY

# install requirements
RUN pip install "dvc[s3]"
RUN pip install -r requirements_inference.txt

# initialise dvc
RUN dvc init --no-scm
# configuring remote server in dvc
RUN dvc remote add -d model-store s3://models-dvc-viraj/trained_models/
RUN cat .dvc/config
# pulling the trained model
RUN dvc pull dvcfiles/trained_model.dvc

ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8

# running the application
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
I may not be understanding the issue with Docker, but from the error it looks like the model does not exist on Hugging Face: either it was removed, or the path or model name is incorrect. To fix this, try changing the model path. Search for a model on Hugging Face and use it.
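If the suspicion is a wrong model id, it can be verified outside of Lambda by trying to load it directly; the model name below is only an example, replace it with the one the project actually uses:

# Check that the model id resolves on the Hugging Face Hub; the name below is
# only an example and should be swapped for the id from the failing config.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
print("loaded", model_id)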
I am following the Week 8 blog post. When I deploy the container using Lambda and try to test it using the Test section, the execution fails. I get the following log. Can you please help with this?

Does this function already have internet access to download that model? (Sorry if the question is naive.)