Closed pflickin closed 5 years ago
Question: Previously we had an extra step of building a custom base image from a desired base image, and then starting from that custom base image in the Dockerfile of a new API. Here, we include this step in the Dockerfile - is this the path forward from now on? (I like this, no extra step to run elsewhere.)
Re: Dockerfile
The port number exposed in the Dockerfile needs to be changed from 8024 to 1212 to be consistent with supervisord.conf.
Suggestion: move the COPY of the API code to the end of the copy operations, since all the other files come from the template (just a nicer order IMO):
# Note: supervisord.conf reflects the location and name of your API code.
COPY ./supervisord.conf /etc/supervisord.conf
# startup.sh is a helper script
COPY ./startup.sh /
RUN chmod +x /startup.sh
COPY ./LocalForwarder.config /lf/
# Copy your API code
COPY ./animal-detector /app/animal-detector/
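For reference, here is a sketch of how those pieces might fit together in a full Dockerfile, with the port fixed to 1212 and the API copy moved last (the base-image name is a placeholder, not the actual tag):

```dockerfile
# Placeholder - substitute the actual AI4E base image tag here
FROM <your-base-image>

# supervisord.conf reflects the location and name of your API code.
COPY ./supervisord.conf /etc/supervisord.conf

# startup.sh is a helper script
COPY ./startup.sh /
RUN chmod +x /startup.sh

COPY ./LocalForwarder.config /lf/

# Copy your API code last - everything above comes from the template
COPY ./animal-detector /app/animal-detector/

# Must match the port in supervisord.conf
EXPOSE 1212
```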
Re: runserver.py
Looks like the import statement `from time import sleep` is unnecessary.
Seems that `data = request.data` in the function `process_request_data` is not used.
Making some minor corrections to this comment (my version here - I can make a PR if easier):
# Use the AI4EService to execute your functions within a logging trace, which supports long-running/async functions,
# handles SIGTERM signals from AKS, etc., and handles concurrent requests.
Just curious: when "If the number of requests exceeds (changed from exceed) this limit, a 503 is returned to the caller", what is the error message - asking them to retry later?
In the decorator where `content_types` is specified, is this checked against the POST request's body or the files that come with it? Sometimes images etc. are uploaded while other parameters are passed in the body's JSON. I'm manually checking the file types currently, which is sufficient. We should specify what this `content_types` is actually checking (I'm guessing it's checking the request's body?).
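For context, the manual file-type check I mentioned is roughly along these lines - a stdlib-only sketch that sniffs magic bytes rather than trusting the declared content type (the helper name and the two formats covered are just for illustration):

```python
from typing import Optional

def sniff_image_type(data: bytes) -> Optional[str]:
    """Guess an image type from its magic bytes; None if unrecognized.

    Only JPEG and PNG are covered here - extend as needed. This checks
    the actual uploaded bytes, not the Content-Type header the caller sent.
    """
    if data[:3] == b"\xff\xd8\xff":
        return "image/jpeg"
    if data[:8] == b"\x89PNG\r\n\x1a\n":
        return "image/png"
    return None
```

This runs on the raw bytes of each uploaded file, so it works regardless of whether the request mixes file uploads with JSON parameters in the body.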
Are all the fields in `@ai4e_service.api_sync_func` required? Or is e.g. `content_max_length` optional? I want to send a customized error message for content length, so I'm not planning on specifying it there.
Question: Previously we had an extra step of building a custom base image from a desired base image, and then starting from that custom base image in the Dockerfile of a new API. Here, we include this step in the Dockerfile - is this the path forward from now on? (I like this, no extra step to run elsewhere.)
- Yup, this is the new way.
Just curious: when "If the number of requests exceeds (changed from exceed) this limit, a 503 is returned to the caller", what is the error message - asking them to retry later?
- "Service is busy, please try again later"
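As a toy illustration of that behavior (not the actual framework code - the limit value and function name here are made up), the gate amounts to:

```python
MAX_CONCURRENT_REQUESTS = 5  # illustrative limit, not the framework default

def admit_request(active_requests: int):
    """Return an (HTTP status, message) pair for an incoming request.

    Mirrors the described behavior: once the number of in-flight requests
    reaches the limit, callers get a 503 asking them to retry later.
    """
    if active_requests >= MAX_CONCURRENT_REQUESTS:
        return 503, "Service is busy, please try again later"
    return 200, "OK"
```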
In the decorator where `content_types` is specified, is this checked against the POST request's body or the files that come with it? Sometimes images etc. are uploaded while other parameters are passed in the body's JSON. I'm manually checking the file types currently, which is sufficient. We should specify what this `content_types` is actually checking (I'm guessing it's checking the request's body?).
- Just the body, for now.

Are all the fields in `@ai4e_service.api_sync_func` required? Or is e.g. `content_max_length` optional? I want to send a customized error message for content length, so I'm not planning on specifying it there.
- Only api_path and methods are required. I'll make a change in the quickstart doc.
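A toy sketch (not the real ai4e_service implementation) of how a decorator like this can make `content_max_length` optional - when it is left out, the framework-side length check is simply skipped, leaving room for a custom error message inside your own handler:

```python
from typing import Callable, List, Optional

def api_sync_func(api_path: str, methods: List[str],
                  content_max_length: Optional[int] = None) -> Callable:
    """Minimal decorator mock: only api_path and methods are required."""
    def decorator(func: Callable) -> Callable:
        def wrapper(body: bytes):
            # The length check only happens when content_max_length was given.
            if content_max_length is not None and len(body) > content_max_length:
                return 413, "Payload too large"
            return func(body)
        return wrapper
    return decorator

@api_sync_func(api_path="/detect", methods=["POST"])  # no length limit here
def handle(body: bytes):
    # Custom length handling can live inside the function instead.
    if len(body) > 10:
        return 400, "custom: request too long, please shrink it"
    return 200, "processed"
```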
The file `model/pipeline.config` is technically not needed for everything to work... All the information about the model is contained in the .pb model file - it might confuse some people. I have it in the other repo to keep a record. We could put a .gitkeep file in the /model folder to keep the folder?