CONDA IS NOT NEEDED AS A PACKAGE MANAGER. All setup is done with the Python Software Foundation's recommended tools, virtualenv and pip, plus the mainstream production tool Docker. See PEP 453, which "officially recommend[s] the use of pip as the default installer for Python packages".
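A minimal setup along those lines might look like the following sketch (the environment directory name `.venv` and the `requirements.txt` step are illustrative):

```shell
# Create an isolated virtual environment with the standard library's venv module
python3 -m venv .venv
# Activate it for the current shell session
. .venv/bin/activate
# Confirm the interpreter now resolves inside the environment
python -c "import sys; print(sys.prefix)"
# Dependencies would then be installed with pip, e.g.:
#   python -m pip install -r requirements.txt
```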
GitHub Codespaces is FREE for education, as are GPU Codespaces, as of this writing (December 2022).
Things included are:

* Makefile
* Pytest
* pandas
* Pylint or Ruff
* Dockerfile
* GitHub Copilot
* Jupyter and IPython
* Most common Python libraries for ML/DL and Hugging Face
* GitHub Actions
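A Makefile in this kind of template typically ties the tools above together; a hypothetical sketch (target names and flags are illustrative, not the repo's actual Makefile):

```make
install:
	pip install --upgrade pip &&\
		pip install -r requirements.txt

test:
	python -m pytest -vv

lint:
	pylint --disable=R,C *.py

all: install lint test
```

With targets like these, `make all` would run the full install-lint-test cycle locally or in GitHub Actions.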
To try the BentoML quickstart container:

```shell
docker run -it --rm -p 8888:8888 -p 3000:3000 -p 3001:3001 bentoml/quickstart:latest
```
The following examples test out the GPU (including Docker GPU):

```shell
python utils/quickstart_pytorch.py
python utils/verify_cuda_pytorch.py
python utils/quickstart_tf2.py
nvidia-smi -l 1
```
Each should show a GPU in use. You can also run `./utils/transcribe-whisper.sh` and verify the GPU is working with `nvidia-smi -l 1`.

To inspect the GPU hardware directly:

```shell
lspci | grep -i nvidia
```

You should see something like: `0001:00:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] (rev a1)`
Additionally, this workspace is set up to fine-tune Hugging Face models:

```shell
python hugging-face/hf_fine_tune_hello_world.py
```
Because of potential versioning conflicts between PyTorch and TensorFlow, it is recommended to run TensorFlow via a GPU container and PyTorch via the default environment. See the TensorFlow GPU documentation.
Run:

```shell
docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
```

You can also explore the container interactively:

```shell
docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu
```

Then, once inside, run:

```shell
apt-get update && apt-get install -y pciutils
lspci | grep -i nvidia
```

To mount the code into your container:

```shell
docker run --gpus all -it --rm -v $(pwd):/tmp tensorflow/tensorflow:latest-gpu /bin/bash
```

Then run `apt-get update && apt-get install -y git && cd /tmp`, followed by `make install`. Now you can verify that you can train deep learning models with `python utils/quickstart_tf2.py`.
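The mount-and-train steps above can also be collapsed into one non-interactive command; a sketch that assumes the repository is the current directory and that Docker and an NVIDIA GPU are available (the guard simply reports when Docker is absent):

```shell
# Run the whole mount -> install -> train flow in one shot, if Docker is present.
if command -v docker >/dev/null 2>&1; then
  docker run --gpus all --rm -v "$(pwd)":/tmp tensorflow/tensorflow:latest-gpu \
    /bin/bash -c "apt-get update && apt-get install -y git && cd /tmp && make install && python utils/quickstart_tf2.py"
else
  echo "docker not available"
fi
```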
See https://www.tensorflow.org/resources/recommendation-systems.

```shell
# Deploy the retrieval model with TensorFlow Serving
docker run -t --rm -p 8501:8501 \
  -v "RETRIEVAL/MODEL/PATH:/models/retrieval" \
  -e MODEL_NAME=retrieval tensorflow/serving &
```
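Once the serving container is up, the model is reachable over TensorFlow Serving's REST API on port 8501; a sketch of a prediction request (the request body depends on the retrieval model's input signature and is illustrative only):

```shell
# POST a prediction request; the /v1/models/<name>:predict endpoint is the
# standard TensorFlow Serving REST API.
curl -s -X POST http://localhost:8501/v1/models/retrieval:predict \
  -d '{"instances": ["42"]}' \
  || echo "TensorFlow Serving is not reachable on localhost:8501"
```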
Used as the base and customized in the following Duke MLOps and Applied Data Engineering Coursera Labs: