This PR updates several `README.md` files and references to the Hugging Face PyTorch DLC for Inference, which were pointing to the previous version of the container: `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-inference-cpu.2-2.transformers.4-41.ubuntu2204.py311` (CPU) and `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-inference-cu121.2-2.transformers.4-41.ubuntu2204.py311` (GPU).
They now point to the latest versions: `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-inference-cpu.2-2.transformers.4-44.ubuntu2204.py311` (CPU) and `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-inference-cu121.2-2.transformers.4-44.ubuntu2204.py311` (GPU).
Additionally, this PR fixes some issues in the examples showing how to run the PyTorch Inference containers locally: the port was not set properly (the examples were using the Vertex AI port), and the default values for the environment variables were missing.
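For context, a local run of this kind of container typically needs the port published explicitly and the expected environment variables set, rather than relying on the Vertex AI defaults. The sketch below is illustrative only: the port number, model ID, and the `HF_MODEL_ID` / `HF_TASK` variable names are assumptions, not copied from the updated examples.

```shell
# Illustrative sketch (assumed port and env var names, not from this PR):
# publish the container's HTTP port on localhost and provide the model
# configuration via environment variables instead of Vertex AI defaults.
docker run -ti -p 8080:8080 \
    -e HF_MODEL_ID=distilbert/distilbert-base-uncased-finetuned-sst-2-english \
    -e HF_TASK=text-classification \
    us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-inference-cpu.2-2.transformers.4-44.ubuntu2204.py311
```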
Finally, this PR also updates the table of published containers in the `README.md` and fixes the formatting of the updated `README.md` files.