Closed: pflickin closed this 5 years ago
Thanks Patrick, this is clever and solves the problem well.
Just for my education: when using a compatible Ubuntu-based custom image as the base image, what is the benefit of copying ai4e_api_tools over from the base Python image instead of downloading it from blob storage or a GitHub repo? The current method is very clean; I just wanted to know whether anything else from the base Python image is used.
Let me know if you'd like me to do some testing.
I also don't think we mentioned how to adapt this example for GPU in the documentation (basically only two changes: installing tensorflow-gpu of the desired version instead of tensorflow, and starting the Docker container with nvidia-docker for testing on a GPU-equipped VM).
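For concreteness, a minimal sketch of those two changes, assuming a pip-based TensorFlow install in the Dockerfile; the base image reference, the pinned version, and the image name/port in the run command are placeholders, not values from this repo:

```Dockerfile
# Hypothetical GPU variant of the example Dockerfile; the base image reference
# and the pinned TensorFlow version are placeholders.
FROM your-registry/ai4e-base-py:latest

# Swap the CPU package for the GPU build (TF 1.x-era packaging).
RUN pip install tensorflow-gpu==1.14.0

# For local testing on a GPU-equipped VM, start the container through the
# NVIDIA runtime instead of plain docker, e.g.:
#   nvidia-docker run -p 8081:80 my-tf-gpu-api:latest
# (newer Docker releases expose the same capability via `docker run --gpus all ...`)
```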
Wild Me likes the paradigm of having a fixed /ai4e_api_tools directory that stores all of the requirements as well as the installation scripts. It looks like most of the telemetry results are being stored in /app_insights_data, correct?
Is startup.sh required or can we expose a different ENTRYPOINT and CMD?
Where are the Azure Blob configuration file(s) / environment variable(s) defined?
By: Jason Parham, Senior Computer Vision Research Engineer, Wild Me
Thanks for the review, Jason. To answer your questions:

The libraries could be downloaded from blob, etc., but I aimed to keep the Dockerfiles consistent.

The nvidia details need to be added to the TF example, but let's do that in a subsequent PR.
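To make the copy-from-the-base-image pattern concrete, here is a minimal multi-stage Dockerfile sketch; both image references are placeholders, not this repo's actual names:

```Dockerfile
# Hypothetical multi-stage build: take ai4e_api_tools from an AI4E-style base
# Python image (stage name and registry/tag are placeholders) and drop it into
# a custom Ubuntu-based image so the directory layout matches the stock images.
FROM your-registry/ai4e-base-py:latest AS ai4e

FROM ubuntu:18.04
COPY --from=ai4e /ai4e_api_tools /ai4e_api_tools
# ... install your own runtime, copy your API code, and set ENTRYPOINT/CMD ...
```

Because the tools ride along inside the published base image, every Dockerfile that builds on it (or copies from it) stays consistent, which is the trade-off described above versus pulling the same files from blob storage or GitHub.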
@pflickin Ok, sounds good. Can we get a scaled-down version of ./startup.sh that includes only the required tracing scripts? Is it possible to remove that requirement from startup.sh and use crontab with an @reboot trigger, or some other systemd service + a config file?
@bluemellophone Other than the tracing, the only thing in startup.sh is the supervisord call, which handles restarting failed services, etc. It's not required and is provided as an example only. You are free to use whatever process control system you would like.
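As a sketch of what replacing startup.sh might look like, assuming supervisord stays as the (optional) process manager; the image reference, config path, and supervisord location are assumptions, not the framework's documented layout:

```Dockerfile
# Hypothetical Dockerfile tail that skips the stock startup.sh and points the
# container at its own process manager instead. All names/paths are placeholders.
FROM your-registry/ai4e-base-py:latest

# Bring your own supervisor configuration (or swap in any other process
# control system; the framework does not mandate supervisord).
COPY supervisord.conf /etc/supervisord.conf

# Run supervisord in the foreground as the container's entrypoint so it can
# restart failed services, as startup.sh's supervisord call does today.
ENTRYPOINT ["supervisord", "-n", "-c", "/etc/supervisord.conf"]
```

If the tracing pieces of startup.sh are still wanted, they would need to be invoked from the chosen supervisor or entrypoint instead.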